
About this Book

This book constitutes revised papers from the 12th International Conference on Large-Scale Scientific Computing, LSSC 2019, held in Sozopol, Bulgaria, in June 2019. The 70 papers presented in this volume were carefully reviewed and selected from 81 submissions. The book also contains two invited talks. The papers were organized in topical sections named as follows: control and optimization of dynamical systems; meshfree and particle methods; fractional diffusion problems: numerical methods, algorithms and applications; pore scale flow and transport simulation; tensors based algorithms and structures in optimization and applications; HPC and big data: algorithms and applications; large-scale models: numerical methods, parallel computations and applications; Monte Carlo algorithms: innovative applications in conjunction with other methods; application of metaheuristics to large-scale problems; large scale machine learning: multiscale algorithms and performance guarantees; and contributed papers.

Table of Contents

Frontmatter

Correction to: Impact of Data Assimilation on Short-Term Precipitation Forecasts Using WRF-ARW Model

In the originally published version of chapter 30 an acknowledgement was missing. This has been corrected.

Evgeni Vladimirov, Reneta Dimitrova, Ventsislav Danchovski

Invited Papers

Frontmatter

First-Order System Least Squares Finite-Elements for Singularly Perturbed Reaction-Diffusion Equations

We propose a new first-order-system least squares (FOSLS) finite-element discretization for singularly perturbed reaction-diffusion equations. Solutions to such problems feature layer phenomena, and are ubiquitous in many areas of applied mathematics and modelling. There is a long history of the development of specialized numerical schemes for their accurate numerical approximation. We follow a well-established practice of employing a priori layer-adapted meshes, but with a novel finite-element method that yields a symmetric formulation while also inducing a so-called “balanced” norm. We prove continuity and coercivity of the FOSLS weak form, present a suitable piecewise uniform mesh, and report on the results of numerical experiments that demonstrate the accuracy and robustness of the method.

James H. Adler, Scott MacLachlan, Niall Madden
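As an aside for readers unfamiliar with a priori layer-adapted meshes: the piecewise-uniform meshes mentioned above are typically of Shishkin type, condensing points inside layers of width proportional to the perturbation parameter. The sketch below is a minimal illustration, not the authors' code; the mesh parameter `sigma` and the two-sided layer structure are illustrative assumptions.

```python
import math

def shishkin_mesh(n, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for a reaction-diffusion
    problem -eps^2 u'' + b u = f with boundary layers at both endpoints.
    n must be divisible by 4; sigma is the mesh parameter (often taken as
    the formal order of the scheme)."""
    assert n % 4 == 0
    # Transition point: each layer of width O(eps * ln n) gets n/4 points.
    tau = min(0.25, sigma * eps * math.log(n))
    left = [tau * i / (n // 4) for i in range(n // 4)]
    mid = [tau + (1 - 2 * tau) * i / (n // 2) for i in range(n // 2)]
    right = [1 - tau + tau * i / (n // 4) for i in range(n // 4 + 1)]
    return left + mid + right

mesh = shishkin_mesh(16, 1e-4)
```

For small eps the transition point tau is tiny, so a quarter of the points sit inside each thin layer while the coarse interior mesh stays uniform.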

Big Data Analysis: Theory and Applications

With the continuous improvement of data processing and storage capabilities, the Big Data era has entered public view. Under these circumstances, the generation of massive data has greatly facilitated the development of data mining algorithms. This paper describes the status of data mining and presents three of our works: optimization-based data mining, intelligent knowledge, and the intelligence quotient of Artificial Intelligence. In addition, we introduce some applications that have emerged in the context of big data. Furthermore, this paper indicates three potential directions for future research in big data analysis.

Yong Shi, Pei Quan

Control and Optimization of Dynamical Systems

Frontmatter

Solutions to the Hamilton-Jacobi Equation for Bolza Problems with State Constraints and Discontinuous Time Dependent Data

This paper concerns the characterization of the value function associated with a state constrained Bolza problem in which the data are allowed to be discontinuous w.r.t. the time variable on a set of zero measure and have everywhere left and right limits. Using techniques coming from viability theory and nonsmooth analysis, we provide a characterization of the value function as the unique solution to the Hamilton-Jacobi equation, in a generalized sense which employs the lower Dini derivative and the proximal normal vectors.

Julien Bernis, Piernicola Bettiol

Optimal Control Problem of a Metronomic Chemotherapy

In this paper we consider a metronomic chemotherapy model which is optimally controlled over the expected future lifetime of the particular patient. Under certain assumptions concerning the distribution of the patient's future lifetime, it can easily be transformed into a purely deterministic optimal control problem with infinite horizon. To solve the latter, the open source software package OCMat was used. Solutions to optimal control problems with $$L_2$$- and regularized $$L_1$$-objective functionals have been compared.

Dieter Grass, Valeriya Lykina

Asymptotic Behaviour of Controllability Gramians and Convexity of Small-Time Reachable Sets

We study the convexity of reachable sets for a nonlinear control-affine system on a small time interval under integral constraints on the control variables. The convexity of a reachable set for a nonlinear control system under integral constraints was proved by B. Polyak under the assumption that the linearization of the system is controllable and the $$L_2$$ norms of the controls are bounded from above by a sufficiently small number. Using this result, we propose sufficient conditions for the convexity of reachable sets of a control-affine system on a small time interval. These conditions are based on estimates of the asymptotics of the minimal eigenvalue of the controllability Gramian of the system linearization, which depends on a small parameter (the length of the interval). A procedure for calculating the estimates using the expansion of the Gramian into a series in powers of the small parameter is described, and some illustrative examples are presented.

Mikhail Gusev
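The small-time behaviour of the minimal eigenvalue of a controllability Gramian can be checked numerically on a toy system. The sketch below uses the double integrator, an illustrative assumption rather than an example from the paper, because its Gramian is available in closed form.

```python
import math

# Controllability Gramian of the double integrator x1' = x2, x2' = u:
# W(t) = int_0^t e^{As} B B^T e^{A^T s} ds = [[t^3/3, t^2/2], [t^2/2, t]].
# For small t the minimal eigenvalue behaves like t^3 / 12
# (leading term of det(W) / trace(W)).
def gramian(t):
    return [[t ** 3 / 3.0, t ** 2 / 2.0],
            [t ** 2 / 2.0, t]]

def min_eig_2x2(m):
    # Closed-form smaller eigenvalue of a symmetric 2x2 matrix.
    a, b, c = m[0][0], m[0][1], m[1][1]
    return 0.5 * (a + c - math.sqrt((a - c) ** 2 + 4 * b * b))

# The ratio lambda_min(W(t)) / t^3 should approach 1/12 as t -> 0.
ratios = [min_eig_2x2(gramian(t)) / t ** 3 for t in (1e-1, 1e-2, 1e-3)]
```

This is exactly the kind of asymptotic estimate the abstract refers to: the Gramian degenerates as the interval shrinks, at a polynomial rate determined by the system.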

On the Regularity of Mayer-Type Affine Optimal Control Problems

The paper presents a sufficient condition for strong metric sub-regularity (SMsR) of the system of first order optimality conditions (optimality system) for a Mayer-type optimal control problem with dynamics affine with respect to the control. The SMsR property at a reference solution means that any solution of the optimality system, subjected to "small" disturbances, which is close enough to the reference one is at a distance from it at most proportional to the size of the disturbance. The property is well understood for problems satisfying certain coercivity conditions, which, however, are not fulfilled for affine problems.

Nikolai P. Osmolovskii, Vladimir M. Veliov

Meshfree and Particle Methods

Frontmatter

Mesh-Hardened Finite Element Analysis Through a Generalized Moving Least-Squares Approximation of Variational Problems

In most finite element methods the mesh is used both to represent the domain and to define the finite element basis. As a result the quality of such methods is tied to the quality of the mesh and may suffer when the latter deteriorates. This paper formulates an alternative approach, which separates the discretization of the domain, i.e., the meshing, from the discretization of the PDE. The latter is accomplished by extending the Generalized Moving Least-Squares (GMLS) regression technique to the approximation of bilinear forms and using the mesh only for the integration of the GMLS polynomial basis. Our approach yields a non-conforming discretization of the weak equations that can be handled by standard discontinuous Galerkin or interior penalty terms.

P. Bochev, N. Trask, P. Kuberry, M. Perego

An Adaptive LOOCV-Based Algorithm for Solving Elliptic PDEs via RBF Collocation

We present a new adaptive scheme for solving elliptic partial differential equations (PDEs) through a radial basis function (RBF) collocation method. Our adaptive algorithm is meshless and is characterized by the use of an error indicator, which depends on a leave-one-out cross validation (LOOCV) technique. This approach allows us to locate the areas that need to be refined, including the possibility of adaptively adding or removing points. The algorithm turns out to be flexible and effective by means of a good interaction between error indicator and refinement procedure. Numerical experiments point out the performance of our scheme.

R. Cavoretto, A. De Rossi
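The LOOCV error indicator can be illustrated by brute force: refit the interpolant without each point and measure the prediction error there. The sketch below is a naive 1D version with Gaussian RBFs; the shape parameter, the test function, and the brute-force recomputation (rather than Rippa-style shortcuts) are all illustrative assumptions, not the authors' algorithm.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf(r, shape=5.0):                      # Gaussian kernel
    return math.exp(-(shape * r) ** 2)

def interpolate(xs, ys, x):
    A = [[rbf(abs(a - b)) for b in xs] for a in xs]
    c = solve(A, ys)
    return sum(ci * rbf(abs(x - xi)) for ci, xi in zip(c, xs))

def loocv_errors(xs, ys):
    """Leave-one-out residuals: refit without point i, evaluate at x_i."""
    errs = []
    for i in range(len(xs)):
        xs_i = xs[:i] + xs[i + 1:]
        ys_i = ys[:i] + ys[i + 1:]
        errs.append(abs(interpolate(xs_i, ys_i, xs[i]) - ys[i]))
    return errs

f = lambda x: math.tanh(20 * (x - 0.5))     # steep gradient near x = 0.5
xs = [i / 10 for i in range(11)]
errs = loocv_errors(xs, [f(x) for x in xs])
# Index of the largest LOOCV residual; an adaptive scheme would insert
# new points near this location.
worst = xs[max(range(len(errs)), key=errs.__getitem__)]
```

An adaptive loop would alternate this indicator with a refinement step, adding points where the residual is large and optionally removing them where it is negligible.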

Adaptive Refinement Techniques for RBF-PU Collocation

We propose new adaptive refinement techniques for solving Poisson problems via a collocation radial basis function partition of unity (RBF-PU) method. As the construction of an adaptive RBF-PU method is still an open problem, we present two algorithms based on different error indicators and refinement strategies that turn out to be particularly suited for a RBF-PU scheme. More precisely, the first algorithm is characterized by an error estimator based on the comparison of two collocation solutions evaluated on a coarser set and a finer one, while the second one depends on an error estimate that is obtained by a comparison between the global collocation solution and the associated local RBF interpolant. Numerical results support our study and show the effectiveness of our algorithms.

R. Cavoretto, A. De Rossi

Fractional Diffusion Problems: Numerical Methods, Algorithms and Applications

Frontmatter

A Second Order Time Accurate Finite Volume Scheme for the Time-Fractional Diffusion Wave Equation on General Nonconforming Meshes

SUSHI (Scheme Using Stabilization and Hybrid Interfaces) is a finite volume method originally developed to approximate heterogeneous and anisotropic diffusion problems. It has later been applied to approximate several types of partial differential equations. The main feature of SUSHI is that the control volumes need only be assumed to be polyhedral. Further, a consistent and stable discrete gradient is developed. In this note, we establish a second order time accurate implicit scheme for the TFDWE (Time-Fractional Diffusion-Wave Equation). The space discretization is based on the use of SUSHI, whereas the time discretization is performed on a uniform mesh. The scheme is based on an equivalent system of two low order equations. We sketch the proof of the convergence of the stated scheme. The convergence is unconditional. This work improves on [3], in which a first order scheme, whose convergence is conditional, is established.

Fayssal Benkhaldoun, Abdallah Bradji

Identification of a Time-Dependent Right-Hand Side of an Unsteady Equation with a Fractional Power of an Elliptic Operator

An inverse problem of identifying the right-hand side is considered for an unsteady equation with a fractional power of the elliptic operator. We consider the case when the time-dependent right-hand side is unknown. The redefinition (additional information) is associated with the known solution at an internal point (points) of the computational domain. The computational algorithm is based on a special decomposition of the solution of the unsteady problem during a transition from the previous time level to the next one. The related auxiliary problems are direct boundary value problems for stationary equations with fractional powers of elliptic operators. Some features of the proposed computational algorithm are demonstrated by the results of numerical experiments for a model 2D inverse problem.

Petr N. Vabishchevich

Pore Scale Flow and Transport Simulation

Frontmatter

Computational Identification of Adsorption and Desorption Parameters for Pore Scale Transport in Random Porous Media

Reactive flow at the pore scale in random porous media is considered at low Péclet and Damköhler numbers, and the computational identification of unknown adsorption and desorption rates is discussed. The reactive transport is governed by steady-state Stokes equations, coupled with a convection-diffusion equation for species transport. The surface reactions, namely adsorption and desorption, are accounted for via a Robin boundary condition. Finite element approximation in space and implicit time discretization are exploited. The measured concentration of the species at the outlet of the domain is provided to carry out the identification procedure. The impact of noise in the measurements on the parameter identification procedure is studied. A stochastic parameter identification approach is adopted. Computational results demonstrating the potential of the considered parameter identification approaches are presented.

Vasiliy V. Grigoriev, Oleg Iliev, Petr N. Vabishchevich

Weighted Time-Semidiscretization Quasilinearization Method for Solving Richards' Equation

This paper concerns an efficient $$\sigma $$-weighted ($$0<\sigma <1$$) time-semidiscretization quasilinearization technique for the numerical solution of Richards' equation. We solve the classical equation and a new $$\alpha $$-time-fractional ($$0<\alpha <1$$) equation that models anomalous diffusion in porous media. A high-order approximation of the $$\alpha =2(1-\sigma )$$ fractional derivative is applied. Numerical comparison results are discussed.

Miglena N. Koleva, Lubin G. Vulkov

Tensors Based Algorithms and Structures in Optimization and Applications

Frontmatter

On the Problem of Decoupling Multivariate Polynomials

In this paper we address the applicability of the algorithm for the problem of decoupling multivariate polynomials proposed in [2]. Through numerous examples we demonstrate that this algorithm, unfortunately, fails to provide a solution in some cases. We therefore empirically determine the application scope of this algorithm and show that it is connected with the uniqueness conditions of the CP-decomposition (Canonical Polyadic Decomposition). We also investigate the approximation properties of this algorithm and show that it is capable of constructing the best low-rank polynomial approximation provided that the CP-decomposition is unique.

Stanislav Morozov, Dmitry A. Zheltkov, Nikolai Zamarashkin

Model Order Reduction Algorithms in the Design of Electric Machines

Although model order reduction techniques based on searching for a solution in a low-rank subspace are well researched for the case of linear differential equations, it is still questionable whether such techniques work well for nonlinear PDEs. In this work, model order reduction via the POD-DEIM (Proper Orthogonal Decomposition with Discrete Empirical Interpolation) method is applied to a particular nonlinear parametric PDE that is used for the modeling of electric machines. The idea of the POD-DEIM algorithm is to use statistical data about 'typical solutions' that correspond to 'typical' parameter values to approximate solutions for other parameter values. Practical application of POD-DEIM to this particular PDE met a number of difficulties, and several improvements to the selection of the initial approximation, the selection of interpolation nodes, the selection of the interpolation basis and the handling of moving physical entities were made to make the method work. These improvements, along with some numerical experiments, are presented.

Sergey Petrov

Discrete Vortices and Their Generalizations for Scattering Problems

In the numerical solution of boundary integral equations of electrodynamics, the problem is reduced to a system of linear algebraic equations with a dense matrix. A significant increase in the number of cells of the partition used for the solution can be achieved by applying special methods for compressing dense matrices and fast matrix algorithms. At the same time, extending the classes of wave scattering problems that can be solved requires progress in the boundary integral equation method itself. The present work considers the problem of electromagnetic diffraction in a piecewise homogeneous medium that can consist of domains with different dielectric properties and can contain ideally conducting inclusions in the form of solid objects and screens. The proposed numerical method for solving boundary hypersingular integral equations is based on ideas borrowed from the vortex frame method, widely used in computational aerodynamics.

Alexey Setukha

Nonnegative Tensor Train Factorizations and Some Applications

Nowadays, as the amount of available data grows, the problem of managing information becomes more difficult. In many applications data can be represented as a multidimensional array. However, in the big data case, as well as when we aim at discovering some structure in the data, we are often interested in constructing low-rank tensor approximations, for instance using the tensor train (TT) decomposition. If the original data is nonnegative, we may want to guarantee that the approximant keeps this property. Nonnegative tensor train factorization is an utterly nontrivial task when we cannot afford to see each data element, because that may be too expensive in the case of big data. A natural solution is to build tensor trains with all carriages (cores) nonnegative. This means that the skeleton decompositions (approximations) have to be constructed nonnegative. Nonnegative factorizations can be used as models for recovering suitable structures in data, e.g., in machine learning and image processing tasks. In this work we suggest a new method for nonnegative tensor train factorizations, estimate its accuracy and give numerical results for different problems.

Elena Shcherbakova, Eugene Tyrtyshnikov

Low Rank Structures in Solving Electromagnetic Problems

Hypersingular integral equations are applied in various areas of applied mathematics and engineering. The paper presents a method for solving the problem of diffraction of an electromagnetic wave on a perfectly conducting object of complex form. In order to solve the diffraction problem with large wave numbers using the method of integral equations, it is necessary to assemble a large dense matrix. In order to solve the integral equation, the author used low-rank approximations of large dense matrices. The low-rank approximation method allows multiplying a matrix of size $$N\times N$$ by a vector of size N in $$\mathcal {O}(N\log (N))$$ operations instead of $$\mathcal {O}(N^2)$$. An iterative method (GMRES) is used to solve the system with the large dense matrix represented in low-rank format, using fast matrix-vector multiplication. In the case of a large wave number the matrix becomes ill-conditioned; therefore, it is necessary to use a preconditioner when solving the system with such a matrix. A preconditioner is constructed from the uncompressed matrix blocks of the low-rank matrix representation in order to reduce the number of iterations in the GMRES method. The preconditioner is a sparse matrix, and the MUMPS package is used to solve systems with this sparse matrix on high-performance computing systems.

Stanislav Stavtsev
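The O(Nr) matrix-vector product that underlies such low-rank solvers can be sketched for a single rank-r block: store the factors U and V instead of the full matrix, apply V^T first and then U. The random factors below are purely illustrative.

```python
import random

# A rank-r matrix A = U V^T of size N x N can be applied to a vector in
# O(N r) operations by computing w = V^T x first, then y = U w, instead
# of forming the N x N matrix (O(N^2) storage and work).
random.seed(0)
N, r = 200, 5
U = [[random.gauss(0, 1) for _ in range(r)] for _ in range(N)]
V = [[random.gauss(0, 1) for _ in range(r)] for _ in range(N)]
x = [random.gauss(0, 1) for _ in range(N)]

def lowrank_matvec(U, V, x):
    r = len(U[0])
    # w = V^T x  (r inner products of length N)
    w = [sum(V[i][k] * x[i] for i in range(len(V))) for k in range(r)]
    # y = U w    (N inner products of length r)
    return [sum(U[i][k] * w[k] for k in range(r)) for i in range(len(U))]

# Reference: assemble A explicitly and multiply (O(N^2)).
A = [[sum(U[i][k] * V[j][k] for k in range(r)) for j in range(N)]
     for i in range(N)]
y_full = [sum(A[i][j] * x[j] for j in range(N)) for i in range(N)]
y_fast = lowrank_matvec(U, V, x)
err = max(abs(a - b) for a, b in zip(y_full, y_fast))
```

In a mosaic-skeleton or hierarchical matrix, many such blocks are applied independently, which is what brings the overall matvec down to near-linear complexity inside GMRES.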

Tensors in Modelling Multi-particle Interactions

In this work we present recent results on the application of low-rank tensor decompositions to the modelling of aggregation kinetics taking into account multi-particle collisions (of three or more particles). Such kinetics can be described by a system of nonlinear differential equations whose right-hand side requires $$N^D$$ operations for straightforward evaluation, where N is the number of particle size classes and D is the number of particles colliding simultaneously. This complexity can be significantly reduced by applying low-rank tensor decompositions (either Tensor Train or Canonical Polyadic) to accelerate the evaluation of the sums and convolutions in the right-hand side. Based on this drastic reduction of complexity for evaluating the right-hand side, we further utilize a standard second order Runge-Kutta time integration scheme and demonstrate that our approach yields numerical solutions of the studied equations with very high accuracy in modest time. We also show preliminary results on the parallel scalability of the novel approach and conclude that it can be efficiently utilized on supercomputers.

Daniil A. Stefonishin, Sergey A. Matveev, Dmitry A. Zheltkov
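For orientation, the binary (D = 2) special case of such aggregation kinetics with a constant kernel can be integrated directly; the multi-particle case of the paper generalizes this right-hand side to D-fold sums. The sketch below is illustrative, not the authors' low-rank code: it applies the second order Runge-Kutta scheme mentioned above and checks mass conservation.

```python
# Smoluchowski aggregation with constant kernel K_ij = 1, binary
# collisions (D = 2).  Direct evaluation of the right-hand side costs
# O(N^2) per call; this is the cost the tensor decompositions reduce.
N = 64                        # number of size classes
n = [0.0] * (N + 1)           # n[k] = concentration of clusters of size k
n[1] = 1.0                    # monodisperse initial condition

def rhs(n):
    dn = [0.0] * (N + 1)
    total = sum(n[j] for j in range(1, N + 1))
    for k in range(1, N + 1):
        gain = 0.5 * sum(n[i] * n[k - i] for i in range(1, k))
        loss = n[k] * total
        dn[k] = gain - loss
    return dn

def rk2_step(n, dt):
    """Explicit midpoint (second order Runge-Kutta) step."""
    k1 = rhs(n)
    mid = [a + 0.5 * dt * b for a, b in zip(n, k1)]
    k2 = rhs(mid)
    return [a + dt * b for a, b in zip(n, k2)]

dt, steps = 0.01, 100         # integrate to t = 1
for _ in range(steps):
    n = rk2_step(n, dt)

mass = sum(k * n[k] for k in range(1, N + 1))   # conserved up to truncation
total = sum(n[1:])            # total number; analytically 1/(1 + t/2) at t = 1
```

For the constant kernel the total number of clusters obeys M0' = -M0^2/2, so at t = 1 the computed total should sit close to 2/3 while the mass stays near 1.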

Tensorisation in the Solution of Smoluchowski Type Equations

We investigate the structure of the non-linear operator featured in the Smoluchowski-type system of ordinary differential equations, and find a way to express it algebraically in terms of the parameters of the problem and a few auxiliary tensors, describing, in a sense, the “shape” of the system. We find compact representations of these auxiliary tensors in terms of a Tensor Train decomposition. Provided the parameters admit a compact representation in this format as well, this allows us to rather straightforwardly reuse standard numerical algorithms for a wide range of associated problems, obtaining $$O(\log N)$$ asymptotic complexity.

Ivan Timokhin

On Tensor-Train Ranks of Tensorized Polynomials

Discretization followed by tensorization (mapping from low-dimensional to high-dimensional data) can be used to construct low-parametric approximations of functions. For example, a function f defined on [0, 1] may be mapped to a d-dimensional tensor $$A \in \mathbb {R}^{b\times \dots \times b}$$ with elements $$A(i_1,\dots ,i_d) = f(i_1b^{-1} + \dots + i_db^{-d})$$, $$i_k \in \{0,\dots ,b-1\}$$. The tensor A can now be compressed using one of the tensor formats, e.g. tensor train format. It has been noticed in practice that approximate TT-ranks of tensorizations of degree-n polynomials grow very slowly with respect to n, while the only known bound for them is $$n+1$$. In this paper we try to explain the observed effect. New bounds of the described TT-ranks are proved and shown experimentally to quite successfully capture the observed distribution of ranks.

Lev Vysotsky
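The tensorization map and the n + 1 rank bound are easy to verify numerically on a small example. The sketch below uses base b = 2 and f(x) = x^2 (both illustrative choices, not the paper's experiments) and computes the ranks of all TT unfoldings; the bound follows from expanding f(x + y) into n + 1 products of polynomials in x and y.

```python
# Tensorize f(x) = x^2 on [0, 1) with base b = 2 and d = 8 digits:
# A(i_1, ..., i_d) = f(i_1/2 + i_2/4 + ... + i_d/2^d).
b, d, n = 2, 8, 2
f = lambda x: x ** n

def unfolding(k):
    """Matrix with rows indexed by the first k digits and columns by the
    remaining d - k digits of the argument."""
    rows = []
    for i in range(b ** k):
        x_head = i / b ** k
        rows.append([f(x_head + j / b ** d) for j in range(b ** (d - k))])
    return rows

def matrix_rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    rank, col = 0, 0
    while rank < rows and col < cols:
        piv = max(range(rank, rows), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < tol:
            col += 1
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(rank + 1, rows):
            fac = M[r][col] / M[rank][col]
            for c in range(col, cols):
                M[r][c] -= fac * M[rank][c]
        rank += 1
        col += 1
    return rank

# Every unfolding rank is bounded by n + 1 = 3 for a degree-2 polynomial.
ranks = [matrix_rank(unfolding(k)) for k in range(1, d)]
```

The edge unfoldings are additionally capped by the matrix dimensions (here 2), so the observed rank profile stays at or below the polynomial bound throughout.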

Global Optimization Algorithms Using Tensor Trains

Global optimization problems arise in a huge number of applications, including parameter estimation of various models, molecular biology, drug design and many others. There are several types of methods for this problem: deterministic, stochastic, heuristic and metaheuristic. Deterministic methods guarantee that the found solution is the global optimum, but their complexity allows their use only for problems of relatively small dimensionality, with a simple functional and optimization domain. Non-deterministic methods are based on simple models of stochastic, physical, biological and other processes. In practice such methods are often much faster than deterministic ones, but for most of them there is no proof of such fast convergence, even in simple cases. In this paper we consider a global optimization method based on the tensor train decomposition. The method is non-deterministic and exploits the tensor structure of the functional. Theoretical results are provided proving its fast convergence to the global optimum in some simple cases.

Dmitry A. Zheltkov, Alexander Osinsky

Application of the Global Optimization Methods for Solving the Parameter Estimation Problem in Mathematical Immunology

Mathematical modeling is widely used in modern immunology. The availability of biologically meaningful and detailed mathematical models permits studying the complex interactions between the components of a biological system and predicting the outcome of therapeutic interventions. However, the incomplete theoretical understanding of the immune mechanism leads to uncertainty in the model structure and the need for model identification. This process is iterative, and each step requires data-based model calibration. When the model is highly detailed, a considerable part of the model parameters cannot be measured experimentally or found in the literature, so one has to solve the parameter estimation problem. In the maximum likelihood framework, parameter estimation leads to a minimization problem for a least squares functional when the observational errors are normally distributed. In this work we present different computational approaches to the treatment of the global optimization problem arising in parameter estimation. We consider two high-dimensional mathematical models of HIV (human immunodeficiency virus) infection dynamics as examples. The ODE (ordinary differential equations) and DDE (delay differential equations) versions of the models were studied. For these models we solved the parameter estimation problem using a number of numerical global optimization techniques, including the optimization method based on the tensor-train decomposition (TT). The comparative analysis of the obtained results showed that the TT-based optimization technique is in the leading group of the methods ranked according to their performance in the parameter estimation for the ODE and DDE versions of both models.

V. V. Zheltkova, Dmitry A. Zheltkov, G. A. Bocharov, Eugene Tyrtyshnikov
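The least-squares formulation can be sketched on a toy model. The exponential decay V' = -cV below is a hypothetical stand-in for the paper's HIV models, and the nested grid search is an illustrative substitute for the TT-based optimizer; only the structure (simulate, form the sum of squared residuals, minimize) mirrors the abstract.

```python
import math

# Calibrate the decay rate c of the toy model V' = -c V, V(0) = 1,
# from noisy observations by minimizing the least squares functional.
c_true = 0.5
times = [0.5 * k for k in range(1, 11)]
noise = [0.01 * math.sin(7.0 * t) for t in times]   # deterministic "noise"
data = [math.exp(-c_true * t) + e for t, e in zip(times, noise)]

def sse(c):
    """Sum of squared residuals between model and observations."""
    return sum((math.exp(-c * t) - v) ** 2 for t, v in zip(times, data))

def fit(lo=0.0, hi=2.0, levels=6, pts=21):
    """Nested grid search: repeatedly zoom in around the best candidate."""
    for _ in range(levels):
        grid = [lo + (hi - lo) * i / (pts - 1) for i in range(pts)]
        best = min(grid, key=sse)
        half = (hi - lo) / (pts - 1)
        lo, hi = best - half, best + half
    return best

c_est = fit()
```

Real immunological models replace the closed-form solution with an ODE/DDE integrator inside `sse`, which is why efficient global optimizers such as the TT-based method matter.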

HPC and Big Data: Algorithms and Applications

Frontmatter

Statistical Moments of the Vertical Distribution of Air Pollution over Bulgaria

Air quality has a key impact on the quality of life and human health. The atmospheric composition fields are formed as a result of a complex interaction of processes with different temporal and spatial scales, from global through regional to a chain of local scales. Earth surface heterogeneities, including complex terrain, have a very significant impact on atmospheric dynamics, and hence on the formation of air pollution patterns. The incredible diversity of dynamic processes, the complex chemical transformations of the compounds and the complex emission configuration together lead to the formation of a complex vertical structure of the atmospheric composition. A detailed analysis of this vertical structure, with its temporal/spatial variability, jointly with the characteristics of the atmospheric dynamics, can significantly enrich the knowledge about the processes and mechanisms which form air pollution, including near the earth's surface. The present paper presents some results of a study which aims at performing a reliable, comprehensive and detailed analysis of the 3D structure of the atmospheric composition fields and its connection with the processes which lead to their formation. The numerical simulations of the vertical structure of the atmospheric composition fields over Bulgaria have been performed using the US EPA Model-3 system as a modelling tool for 3D simulations, and the system's nesting capabilities were applied for downscaling the simulations from 81 km to 9 km grid resolution over Bulgaria. The national emission inventory was used as emission input for Bulgaria, while outside the country the emissions are taken from the TNO high resolution inventory.

Georgi Gadzhev, Kostadin Ganev, Plamen Mukhtarov

Distributed Deep Learning on Heterogeneous Computing Resources Using Gossip Communication

With the increased usage of deep neural networks, their structures have naturally evolved, increasing in size and complexity. With currently used networks often containing millions of parameters and hundreds of layers, there have been many attempts to leverage the capabilities of various high-performance computing architectures. Most approaches focus either on using parameter servers or a fixed communication network, or on exploiting particular capabilities of specific computational resources. However, few experiments have been made under relaxed communication consistency requirements and with a dynamic, adaptive way of exchanging information. Gossip communication is a peer-to-peer communication approach that can minimize the overall data traffic between computational agents by providing a weaker guarantee on data consistency: eventual consistency. In this paper, we present a framework for gossip-based communication, suitable for heterogeneous computing resources, and apply it to the problem of parallel deep learning using artificial neural networks. We present different approaches to gossip-based communication in a heterogeneous computing environment consisting of CPUs and MIC-based co-processors, and implement gossiping via both shared and distributed memory. We also provide a simple approach to load balancing in a heterogeneous computing environment that proves efficient for the case of parallel deep neural network training. Further, we explore several approaches to communication exchange and resource allocation when considering parallel deep learning using heterogeneous computing resources, and evaluate their effect on the convergence of the distributed neural network.

Dobromir Georgiev, Todor Gurov
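The core invariant of gossip averaging, that the global mean stays fixed while disagreement between agents decays, can be simulated in a few lines. This is a schematic model of the setting described above, not the paper's implementation; the agent count and round count are arbitrary.

```python
import random

# Gossip averaging: each "agent" holds a local value (standing in for a
# model parameter); in each round a random pair averages their values.
# The global mean is invariant and all agents converge towards it,
# illustrating the eventual-consistency guarantee.
random.seed(42)
values = [float(i) for i in range(16)]          # 16 agents
true_mean = sum(values) / len(values)

def gossip_round(values):
    i, j = random.sample(range(len(values)), 2)
    avg = 0.5 * (values[i] + values[j])
    values[i] = values[j] = avg                 # pairwise exchange

spread0 = max(values) - min(values)
for _ in range(500):
    gossip_round(values)
spread = max(values) - min(values)
```

In distributed deep learning the scalar is replaced by a parameter vector and the pair selection by the peer-to-peer communication topology, but the convergence mechanism is the same.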

Process Analysis of Atmospheric Composition Fields in Urban Area (Sofia City)

The air pollution pattern is formed as a result of the interaction of different processes, so knowing the contribution of each of these processes for different meteorological conditions and for a given emission spatial configuration and temporal profiles can be helpful for understanding the atmospheric composition and the behavior of air pollutants. The present paper demonstrates an analysis of the contribution of the different (chemical and dynamical) processes which form the atmospheric composition in a chosen region. To analyze the contribution of the different dynamic and chemical processes to the air pollution formation over Sofia, the CMAQ Integrated Process Rate Analysis option was applied. The procedure allows the concentration change of each compound to be presented as a sum of the contributions of each of the processes which determine the air pollution concentration. A statistically robust ensemble of the atmospheric composition over Sofia, taking into account the two-way interactions from local to urban scale and tracking the main pathways and processes which lead to atmospheric composition formation at different scales, should be constructed in order to understand the atmospheric composition climate and the behavior of air pollutants. On the basis of 3D modeling tools an extensive database was created, and this data was used for different studies of the atmospheric composition, carried out with good resolution using up-to-date modeling tools and detailed and reliable input data. All the simulations were based on the US EPA (Environmental Protection Agency) Model-3 system for the 7-year period 2008 to 2014. The modeling system consists of 3 models: a meteorological pre-processor, the emission pre-processor SMOKE, and the Chemical Transport Model (CTM) CMAQ.

Ivelina Georgieva, Georgi Gadzhev, Kostadin Ganev, Nikolay Miloshev

Modeling of PM10 Air Pollution in Urban Environment Using MARS

In the modern world, attention is increasingly drawn to the pressing problem of atmospheric air pollution, which is a serious threat to human health. Worldwide, China, India, Indonesia and some European countries, including Bulgaria, are among the most polluted. A very large number of scientific studies have been devoted to these issues, including the study, analysis and forecasting of atmospheric air pollution with particulate matter PM10. In this study the PM10 concentrations in the town of Smolyan, Bulgaria are examined, and high-performance mathematical models for prediction and forecasting depending on weather conditions are developed. For this purpose, the powerful method of multivariate adaptive regression splines (MARS) is applied. The examined data cover a period of 9 years, from 2010 to 2018, on a daily basis. As independent variables, 7 meteorological factors are used: minimum and maximum daily temperatures, wind speed and direction, atmospheric pressure, etc. Additional predictors are lagged PM10 and meteorological variables with a delay of 1 day. Three time variables are included to account for time. Multiple models are created with interactions between predictors up to the 4th order. The best MARS models obtained fit over 80% of the measured data. The models are used to forecast PM10 concentrations 7 days ahead. This approach could be applied for real predictions and the development of computer and mobile applications.

Snezhana G. Gocheva-Ilieva, Atanas V. Ivanov, Desislava S. Voynikova, Maya P. Stoimenova

Application of Orthogonal Polynomials and Special Matrices to Orthogonal Arrays

Special matrices are explored in many areas of science and technology. The Krawtchouk matrix is such a matrix; it plays an important role in coding theory and in the theory of orthogonal arrays, also called fractional factorial designs in the planning of experiments and statistics. In this paper we give explicitly the Smith normal forms of the Krawtchouk matrix and its extended matrix. We also propose a computationally effective method for determining the Hamming distance distributions of an orthogonal array with given parameters. The obtained results facilitate the solution of many existence and classification problems in the theory of codes and orthogonal arrays.

Nikolai L. Manev
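For readers unfamiliar with Krawtchouk matrices, the binary Krawtchouk matrix can be generated from the Krawtchouk polynomials; the following small sketch (the classical construction, not the authors' code) also checks the well-known identity $$K^2 = 2^n I$$:

```python
from math import comb
import numpy as np

def krawtchouk_matrix(n: int) -> np.ndarray:
    """Binary Krawtchouk matrix K with K[i, j] = K_i(j; n),
    where K_i is the i-th Krawtchouk polynomial."""
    K = np.zeros((n + 1, n + 1), dtype=np.int64)
    for i in range(n + 1):
        for j in range(n + 1):
            # K_i(j) = sum_k (-1)^k C(j, k) C(n - j, i - k)
            K[i, j] = sum((-1) ** k * comb(j, k) * comb(n - j, i - k)
                          for k in range(i + 1))
    return K

K = krawtchouk_matrix(4)
print(K @ K)  # equals 2^4 times the identity
```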

Performance Effects of Running Container-Based Open-MPI Cluster in Public Cloud

The vast majority of HPC users heavily leverage MPI middleware for their application needs. For almost two decades, providing an infrastructure that keeps up with increasing demand, security standards, software interoperability, and inefficient resource utilization have been challenges placing HPC administrators between the overhead of virtualization and manual tuning for peak performance, forcing them to find a workable compromise on their own. Recently, developments such as Linux containers, Infrastructure as Code, and the public cloud have opened a new horizon for the industry, redefining engineering roles and providing tools and practices that solve some major caveats of the past. This paper presents three architectures for setting up an Open-MPI cluster in a Linux container-based environment and explains how these architectures can be implemented across private and public clouds, with the main focus on the performance effects in the public cloud.

Teodor Simchev, Emanouil Atanassov

Impact of Data Assimilation on Short-Term Precipitation Forecasts Using WRF-ARW Model

Despite the efforts made by the scientific community during the last decades to improve weather forecasts, the prediction of precipitation systems and fogs is still considered a difficult challenge. The main reason for the difficulty in predicting these phenomena is the complexity of their formation, involving orography dependence, the spatio-temporal inhomogeneity of land use, and large-scale synoptic conditions. Assimilation of remote sensing and in-situ data has been applied in a number of studies in recent years, demonstrating significant improvements in model results. The objective of this study is to evaluate the performance of the Weather Research and Forecasting (WRF) model and to assess the improvement in short-term precipitation forecasts obtained using high-resolution data assimilation of satellite and in-situ measurements. The case study is a weather phenomenon specific to the eastern Balkan Peninsula: a passing winter Mediterranean cyclone causing excessive amounts of rainfall in Bulgaria. A three-dimensional variational (3D-Var) data assimilation system is used in this study. Model results obtained with and without the data assimilation procedure are compared to demonstrate the impact of the method on the start time of precipitation and on the spatial distribution and amount of rainfall.

Evgeni Vladimirov, Reneta Dimitrova, Ventsislav Danchovski
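For reference, 3D-Var determines the analysis state $$\mathbf{x}$$ by minimizing the standard cost function (the textbook formulation, not reproduced from the paper itself): $$J(\mathbf{x}) = \frac{1}{2}(\mathbf{x}-\mathbf{x}_b)^T\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \frac{1}{2}(\mathbf{y}-H(\mathbf{x}))^T\mathbf{R}^{-1}(\mathbf{y}-H(\mathbf{x}))$$, where $$\mathbf{x}_b$$ is the background state, $$\mathbf{y}$$ the vector of observations, $$H$$ the observation operator, and $$\mathbf{B}$$ and $$\mathbf{R}$$ the background and observation error covariance matrices.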

Large-Scale Models: Numerical Methods, Parallel Computations and Applications

Frontmatter

An Introduction and Summary of Use of Optimal Control Methods for PDE’s

In optimal control formulations of partial differential equations the aim is to find a control function that steers the solution to a desired form. A Lagrange multiplier, i.e. an adjoint variable, is introduced to handle the PDE constraint. One can reduce the problem to a two-by-two block matrix form with square blocks, for which a very efficient preconditioner, PRESB, can be applied. This method gives sharp and tight eigenvalue bounds, which hold uniformly with respect to regularization, mesh size and problem parameters, and enables the use of the second-order, inner-product-free Chebyshev iteration method, which can be implemented on parallel computers without any global data communication. Furthermore, the method is insensitive to round-off errors and outperforms other, earlier published methods. Implementational and spectral properties of the method, and a short survey of applications, are given.

Owe Axelsson
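The inner-product-free Chebyshev iteration mentioned above needs only an interval containing the spectrum of the (preconditioned) matrix, which is exactly what uniform eigenvalue bounds such as those of PRESB provide. A minimal sketch of the classical second-order Chebyshev iteration (a textbook version, not the authors' implementation):

```python
import numpy as np

def chebyshev_iteration(A, b, lam_min, lam_max, n_iter):
    """Second-order Chebyshev iteration for an SPD matrix A whose
    spectrum lies in [lam_min, lam_max].  Needs no inner products,
    hence no global reductions on a parallel machine."""
    d = (lam_max + lam_min) / 2.0   # center of the spectral interval
    c = (lam_max - lam_min) / 2.0   # half-width of the interval
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    alpha = 1.0 / d
    for i in range(n_iter):
        if i > 0:
            beta = (c * alpha / 2.0) ** 2
            if i == 1:
                beta *= 2.0
            alpha = 1.0 / (d - beta / alpha)
            p = r + beta * p
        x = x + alpha * p
        r = r - alpha * (A @ p)
    return x

# Demo: 1D Laplacian (SPD, tridiagonal) with its exact spectral bounds
n = 30
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam_min = 2.0 - 2.0 * np.cos(np.pi / (n + 1))
lam_max = 2.0 - 2.0 * np.cos(n * np.pi / (n + 1))
b = np.ones(n)
x = chebyshev_iteration(A, b, lam_min, lam_max, n_iter=300)
print(np.linalg.norm(b - A @ x))
```

Only the two scalar bounds enter the recurrence; everything else is matrix-vector products and vector updates.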

Parallel BURA Based Numerical Solution of Fractional Laplacian with Pure Neumann Boundary Conditions

The study is motivated by the increased usage of the fractional Laplacian in the modeling of nonlocal problems like anomalous diffusion. We present a parallel numerical solution method for the nonlocal elliptic problem: $$(-\varDelta )^\alpha u = f$$, $$0<\alpha < 1$$, $$-\partial u(x)/\partial n=g(x)$$ on $$\partial \varOmega $$, $$\varOmega \subset \mathrm{I\!R}^d$$. The Finite Element Method (FEM) is used for discretization, leading to the linear system $$A^\alpha {\mathbf{u}} = {\mathbf{f}}$$, where A is a sparse symmetric and positive semidefinite matrix. The implemented method is based on the Best Uniform Rational Approximation (BURA) of degree k, $$r_{\alpha ,k}$$, of the scalar function $$t^{\alpha }$$, $$0\le t \le 1$$. The related approximation of $$A^{-\alpha }{\mathbf{f}}$$ can be written as a linear combination of the solutions of k local problems. The latter are found using the preconditioned conjugate gradient method. The method is applicable to computational domains with general geometry. Linear finite elements on unstructured tetrahedral meshes with local refinements are used in the presented numerical tests. The behavior of the relative error, the number of Preconditioned Conjugate Gradient (PCG) iterations, and the parallel time is analyzed, varying the parameter $$\alpha \in \{0.25, 0.50, 0.75\}$$, the BURA degree $$k \in \{5, 6,\dots ,12\}$$, and the mesh size.

Gergana Bencheva, Nikola Kosturski, Yavor Vutov
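The key structural idea, approximating $$A^{-\alpha }{\mathbf{f}}$$ by a combination of solutions of ordinary shifted systems, can be illustrated with a simple quadrature-based rational approximation (a generic sketch using sinc quadrature of the integral representation of the fractional power, not the actual BURA coefficients of the paper):

```python
import numpy as np

def frac_inverse_apply(A, f, alpha, M=80, h=0.4):
    """Approximate A^{-alpha} f as a weighted sum of shifted solves
    (t_j I + A)^{-1} f, via sinc quadrature of the representation
    A^{-alpha} = (sin(pi alpha)/pi) * int_0^inf t^{-alpha} (tI+A)^{-1} dt.
    Structurally, one fractional solve becomes a handful of ordinary
    shifted linear solves, as in BURA-type methods."""
    n = A.shape[0]
    I = np.eye(n)
    u = np.zeros_like(f)
    for j in range(-M, M + 1):
        s = j * h                       # substitution t = exp(s)
        u += np.exp((1.0 - alpha) * s) * np.linalg.solve(np.exp(s) * I + A, f)
    return (np.sin(np.pi * alpha) / np.pi) * h * u

# Small SPD test matrix (1D Laplacian stencil)
A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
f = np.array([1.0, 0.0, 1.0])
u = frac_inverse_apply(A, f, alpha=0.5)
print(u)
```

In practice each shifted solve would of course be done iteratively (e.g. by PCG), not by a dense factorization.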

Bias Correcting of Selected ETCCDI Climate Indices for Projected Future Climate

Regional climate models (RCMs) have been developed and extensively applied in recent decades for dynamically downscaling coarse-resolution information from different sources and for various purposes, including past climate simulations and future climate projections. Due to systematic and random model errors, however, RCM simulations often show considerable deviations from observations. This has led to the development of a number of correction approaches, which have lately become known under the common name of bias correction. Although some criticism exists, the general view in the expert community is that the bias-corrected climate change signal is more reliable than the uncorrected one and thus more suitable for impact assessments. In the present study, which is part of more general work, outputs from simulations of the present-day (1961–1990) climate, as well as of the near-future scenario (2021–2050), performed with the model ALADIN-Climate, are used for the calculation of selected ETCCDI climate indices. The same subset, but based on the observational database E-OBS, is taken from the open archive ClimData and used as reference. The results of the computations, performed over the interior of the Balkan Peninsula, demonstrate the possibilities of the selected bias correction technique and its modifications.

Hristo Chervenkov, Valery Spiridonov
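As a concrete illustration of bias correction (the abstract does not specify which technique is used, so the widely applied empirical quantile mapping is shown here as a generic example on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily temperatures: the "model" runs too cold and too flat
obs = rng.normal(loc=12.0, scale=8.0, size=5000)    # pseudo-observations
model = rng.normal(loc=9.0, scale=6.0, size=5000)   # biased model output

def quantile_map(model_values, model_ref, obs_ref):
    """Empirical quantile mapping: map each model value to the observed
    value at the same quantile of the reference period."""
    q = np.linspace(0.0, 1.0, 101)
    mq = np.quantile(model_ref, q)   # model reference quantiles
    oq = np.quantile(obs_ref, q)     # observed reference quantiles
    return np.interp(model_values, mq, oq)

corrected = quantile_map(model, model, obs)
print(round(model.mean(), 2), round(corrected.mean(), 2), round(obs.mean(), 2))
```

After the mapping, the corrected series reproduces the observed distribution over the reference period, which is the property exploited when the transfer function is then applied to future-scenario output.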

Project for an Open Source GIS Tool for Visualization of Flood Risk Analysis After Mining Dam Failures

Under the DG ECHO-funded project with the acronym ALTER, an effort has been initiated to support the Armenian Ministry of Emergency Situations in establishing public-private partnerships to understand and address flood risks that may occur after mining dam failures. The project focuses on three pilot areas where dams and other activities present risks to local communities: the Akhtala and Teghut areas of Lori Marz along the Shamlugh river; the Vorotan Cascade and its associated dams in the Syunik region; and the Voghji river basin of the Syunik region. In our article, the data collection, analysis and results of dam break modelling for the Geghi reservoir and the Geghanoush tailing dam, located in the Voghji river basin, are presented. All collected data from hydro-meteorological sources, together with elevation, geologic, geomorphological and land use data, have been processed so that a Flood Hazard Index (FHI) map of the studied area could be developed. This information is combined in GIS (Geographic Information System) layers. These layers are uploaded into a specifically designed open source GIS tool in order to assist end users in the field or in the operations room to rapidly assess the risks associated with a flood occurring as a result of a dam break, and to better plan and visualize their activities.

Nina Dobrinkova, Alexander Arakelyan, Aghavni Harutyunyan, Sean Reynolds

Open Source GIS for Civil Protection Response in Cases of Wildland Fires or Flood Events

The article describes the capabilities of open source GIS (Geographic Information Systems) and other Free and Open Source Software (FOSS) for building a desktop application that can support firefighting and volunteer groups in their reaction to wildland fires or flood events. By design, the desktop application has two main modules. The first module is based on open source GIS software. The second is the server software, database and visualization environment for the application. The main goal of the tool is to visualize administrative and vulnerable objects and POIs (Points of Interest), such as logistic centres for water supplies, tools and supplies, from which the firefighting and volunteer groups can take what they need for field work. The idea is to visualize detailed information about the logistic centres and the kind of equipment stored inside. This is needed because modern ICT (Information and Communication Technologies) tools have not been implemented in the field work; at present, in most cases, the groups use instructions written on paper. Our article presents a desktop application that can be used in the field and in operations rooms by firefighting and volunteer groups acting in cases of wildland fires or flood events. In our application, different open source software solutions are used: Geoserver, which allows the user to publish spatial information to the world; QGIS, a professional cross-platform GIS application that is Free and Open Source Software; Web App Builder, a plugin for QGIS that allows easy creation of web applications; Boundless WEBSDK, which provides tools for easy-to-build JavaScript-based web mapping applications; PostgreSQL, a powerful open source object-relational database system; and OpenLayers, an open-source JavaScript library for displaying map data in web browsers.

Nina Dobrinkova, Stefan Stefanov

PDE-Constrained Optimization: Matrix Structures and Preconditioners

In this paper we briefly account for the structure of the matrices, arising in various optimal control problems, constrained by PDEs, and how it can be utilized when constructing preconditioners for the arising linear systems to be solved in the optimization framework.

Ivo Dravins, Maya Neytcheva

One Approach of Solving Tasks in the Presence of Free Surface Using a Multiprocessor Computing Systems

The problem of the motion of a pair of vortices under a free surface for different Froude numbers and the problem of free oscillations of a fluid in a rectangular container are considered. It is assumed that the liquid is weakly compressible and homogeneous. A comparative analysis is carried out against analytical and numerical solutions obtained with an incompressible approach in the authors' previous works. To solve the system of equations, obtained in curvilinear coordinates with appropriate boundary and initial conditions, an explicit second-order CABARET scheme is used. A parallel version of the algorithm, based on Cartesian cell decomposition, is also included. The parallel performance was evaluated on a supercomputing facility with distributed memory. The results open the way to generalizing this approach to free-surface problems in a three-dimensional setting. The authors plan to construct an effective method for the investigation of inhomogeneous fluid flows through further development of this approach. Such explicit techniques offer the possibility of efficient use of multiprocessor systems (clusters) for solving problems in which models of incompressible media previously dominated.

Valentin A. Gushchin, Vasilii G. Kondakov

In Silico Study on the Structure of Novel Natural Bioactive Peptides

Antimicrobial peptides (AMPs) are an abundant and diverse group of molecules produced by many tissues and cell types in a variety of invertebrate, plant and animal species in contact with infectious microorganisms. They play a crucial role as mediators of the primary host defense against microbial invasion. The characteristics and the broad-spectrum, largely nonspecific activity of antimicrobial peptides qualify them as possible candidates for therapeutic alternatives against multi-resistant bacterial strains. AMPs occur in nature in the form of multicomponent secretory fluids that exhibit certain biological activity. For the development of biologicals with predesignated properties, separation of the individual components, their purification and activity analysis are needed. In silico experiments are designed to speed up the identification of the active components in these substances and the understanding of their structural specifics and biodynamics. Here we present the first results of a pilot in silico study on the primary structure formation of peptides newly identified in the mucus of mollusc representatives, as a prerequisite for understanding the possible role of complexation for their biological activity.

Nevena Ilieva, Peicho Petkov, Elena Lilkova, Tsveta Lazarova, Aleksandar Dolashki, Lyudmila Velkova, Pavlina Dolashka, Leandar Litov

Sensitivity of the Simulated Heat Risk in Southeastern Europe to the RegCM Model Configuration—Preliminary Results

The spatial distribution of biometeorological conditions is the topic of many studies in different countries. One of the most important aspects of the adverse effect of weather on human beings is the consequences of excessive exposure to heat. The human body can adapt to temperatures, but only to some extent. If air temperatures become too high, human beings at first feel uncomfortable, but the consequences can be a serious threat to health and even life. The main reasons for this threat are related to the lack of perspiration and to cardiovascular problems. Atmospheric numerical models are used to simulate heat stress in many studies. One of the most affected regions in the recent past, and most likely also in the future, is Southeastern Europe, including Bulgaria. Global models have too low a resolution, but they still suggest very strong heat stress, especially at the end of the 21st century. According to other studies, results from regional meteorological models suggest similar conclusions. The current research concerns the heat stress conditions in the Balkan Peninsula, evaluated from ten-year simulations. They are performed with the regional climate model RegCM. The model is run many times with different combinations of physics parameterizations of some processes. The aim is to compare the heat stress simulated by the different model configurations for the Balkan Peninsula and thus to reveal the dependence of the heat stress evaluation on the model configuration. That would answer the question of the sensitivity of the model to the parameterization schemes from a biometeorological point of view.

Vladimir Ivanov, Georgi Gadzhev, Kostadin Ganev, Hristo Chervenkov

Large-Scale Prediction of the ARS Family Inhibitors of the Oncogenic KRASG12C Mutant

The KRAS protein is a molecular switch that activates cellular processes such as cell growth and differentiation. The G12C point mutation of KRAS is found in various cancer cells. It results in the accretion of the GTP-bound active form, thus accelerating downstream signalling pathways. Recently, the ARS family of compounds was suggested as selective covalent inhibitors of KRASG12C. The most promising, ARS-853, has IC$$_{50}$$ = 1.6 $$\upmu $$M, which is too large for medicinal applications. We demonstrate that calculated dissociation constants K$$_\mathrm{d}$$ are proportional to the experimental IC$$_{50}$$ values and can be utilized as a measure of inhibitor potency. Using molecular modeling tools, we suggest a set of novel compounds with predicted IC$$_{50}$$ values more than an order of magnitude lower than that of ARS-853.

Anna M. Kulakova, Anna V. Popinako, Maria G. Khrenova

Identification of Heat Conductivity in (2+1)D Equation as a Function of Time

The considered problem of identifying the time-dependent heat conductivity coefficient from over-posed boundary data belongs to the class of inverse problems. The proposed solution uses a variational approach for identifying the coefficient. The inverse problem is reformulated as a higher-order elliptic boundary-value problem for the minimization of a quadratic functional of the original equation. The resulting system consists of a well-posed fourth-order boundary-value problem for the temperature and an explicit equation for the unknown heat conductivity coefficient. The obtained boundary-value problem is solved by means of an iterative procedure, which is thoroughly validated.

Tchavdar T. Marinov, Rossitza S. Marinova

Numerical Calculation of Deformations of Composite Material with Fiber Inclusions

In the numerical simulation of the stress-strain state of a composite material, a problem of large computational complexity may arise, due to the need to resolve a large number of inclusions on the grid. It is especially difficult to resolve elongated bodies whose linear dimensions differ by several orders of magnitude, such as fibers. In this paper, we attempt to model fibers in the form of one-dimensional lines, which can significantly reduce the computational complexity of the problem. A comparison of the results for the three-point bending of a concrete block is presented. For the numerical solution, the finite element method was applied using the FEniCS computing platform.

Petr V. Sivtsev, Djulustan Ya. Nikiforov

On the Impact of Reordering in a Hierarchical Semi-Separable Compression Solver for Fractional Diffusion Problems

The performance of a hierarchical solver for systems of linear algebraic equations arising from the finite element (FEM) discretization of fractional diffusion problems is the subject of this study. We consider the integral definition of the fractional Laplacian in a bounded domain, introduced through the Riesz potential. The problem is non-local and the related FEM system has a dense matrix. We utilize the Structured Matrix Package (STRUMPACK) and its implementation of Hierarchical Semi-Separable compression in order to solve the system of linear equations. Our main aim is to improve the performance and accuracy of the method by proposing and analyzing two schemes for reordering the unknowns. The numerical tests are run on the high performance cluster AVITOHOL at IICT-BAS.

Dimitar Slavchev

Computer Simulation of a Saline Enhanced Radio-Frequency Hepatic Ablation Process

We consider the simulation of the thermal and electrical processes involved in a radio-frequency ablation procedure. Radio-frequency ablation is a low-invasive technique for the treatment of hepatic tumors, utilizing AC current to destroy unwanted tissues by heating. We simulate an ablation procedure in which the needle is bipolar, i.e. no ground pad is attached. A saline solution is injected through the needle during the procedure, creating a cloud with higher electrical conductivity around the tip. This approach is safer for some patients. The mathematical model consists of three parts: dynamical, electrical, and thermal. The energy from the applied AC voltage is determined by solving the Laplace equation to find the potential distribution. After that, the electric field intensity and the current density are calculated directly. Finally, the heat transfer equation is solved to determine the temperature distribution. A 3D image of the patient's liver is obtained from a magnetic resonance imaging scan. Then, the geometry of the needle is added. The CGAL library is used to obtain an unstructured mesh in the computational domain. We use the finite element method in space to obtain both the current density and the created temperature field. An unstructured-mesh parallel solver is developed for the considered problem. The parallelization approach is based on partitioning the meshes using ParMETIS. Numerical tests show good performance of the developed parallel solver.

Yavor Vutov, Daniel Nikolov, Ivan Lirkov, Krassimir Georgiev

Studying the Influence of Climate Changes on European Ozone Levels

The large-scale air pollution model UNI-DEM (the Unified Danish Eulerian Model) was used together with several carefully selected climatic scenarios. It was necessary to run the model over a long time interval (sixteen consecutive years) and to use fine resolution on a very large space domain. This caused great difficulties, because it was necessary to (a) perform many runs with different input parameters, (b) use huge input files containing the needed meteorological and emission data, (c) resolve many problems related to the computational difficulties, (d) develop and apply carefully prepared parallel codes, (e) exploit efficiently the cache memories of the available computers and (f) store huge output files for visualization and animation in a proper way. We describe how these difficult tasks were resolved and present many results related to potentially harmful ozone levels.

Zahari Zlatev, Ivan Dimov, István Faragó, Krassimir Georgiev, Ágnes Havasi

Monte Carlo Algorithms: Innovative Applications in Conjunctions with Other Methods

Frontmatter

A Revised Wigner Function Approach for Stationary Quantum Transport

The Wigner equation describing stationary quantum transport has a singularity at the point $$k=0$$. Deterministic solution methods usually deal with the singularity by simply avoiding that point in the mesh (e.g., Frensley's method). Results from such methods are known to depend strongly on the discretization and meshing parameters. We propose a revised approach which explicitly includes the point $$k=0$$ in the mesh. For this we give two equations for $$k=0$$. The first condition is an algebraic constraint which ensures that the solution of the Wigner equation has no singularity at $$k=0$$. If this condition is fulfilled, we can then derive a transport equation for $$k=0$$ as a secondary equation. The resulting system with two equations for $$k=0$$ is overdetermined, and we call it the constrained Wigner equation. We give a theoretical analysis of the overdeterminacy by relating the two equations for $$k=0$$ to boundary conditions for the sigma equation, which is the inverse Fourier transform of the Wigner equation. We show results from a prototype implementation of the constrained equation, which are in good agreement with results from the quantum transmitting boundary method. No numerical parameter fitting is needed.

Robert Kosik, Johann Cervenka, Mischa Thesberg, Hans Kosina

Techniques for Statistical Enhancement in a 2D Multi-subband Ensemble Monte Carlo Nanodevice Simulator

Novel numerical techniques are needed in advanced simulation tools in order to accurately describe the behavior of nanoelectronic devices. In this work, two different numerical techniques for statistical enhancement are included in a 2D Multi-Subband Ensemble Monte Carlo (MS-EMC) simulator. First, the consideration of the Fermi-Dirac statistics for the boundary conditions in the ohmic contacts instead of the Boltzmann ones provides a more accurate picture of the distribution function. Second, the energy-dependent weight model reduces the stochastic noise that the superparticles with very high energy introduce in the device performance. In this work, we study the impact of both numerical techniques in two of the potential candidates to extend the CMOS technology: the Fully-Depleted Silicon-On-Insulator (FDSOI) and the FinFET devices. We show that the choice of the Fermi-Dirac statistics has the same impact in both the FDSOI and the FinFET, whereas the energy-dependent weight model has more significance in the FDSOI than in the FinFET because the latter has better electrostatic integrity.

Cristina Medina-Bailon, Carlos Sampedro, Jose Luis Padilla, Luca Donetti, Vihar Georgiev, Francisco Gamiz, Asen Asenov

Efficient Stochastic Algorithms for the Sensitivity Analysis Problem in the Air Pollution Modelling

Sensitivity analysis of the results of large and complicated mathematical models is a rather tough and time-consuming task. However, it is quite an important problem as far as their critical applications are concerned. There are many such applications in the area of air pollution modelling. On the other hand, there are many natural uncertainties in the input data sets and parameters of a large-scale air pollution model. Such a model, the Danish Eulerian Model with its up-to-date high-performance implementations, is under consideration in this work. Its advanced chemical scheme (the Condensed CBM IV) takes into account a large number of chemical species and numerous reactions between them. Four efficient stochastic algorithms have been used and compared by their accuracy in studying the sensitivity of ammonia and ozone concentration results with respect to the input emission levels and some chemical reaction rate parameters. The results of our numerical experiments show that the stochastic algorithms under consideration are quite efficient for the purpose of our sensitivity studies.

Tzvetan Ostromsky, Venelin Todorov, Ivan Dimov, Zahari Zlatev

Kinetic Monte Carlo Analysis of the Operation and Reliability of Oxide Based RRAMs

By using a stochastic simulation model based on the kinetic Monte Carlo approach, we study the physics, operation and reliability of resistive random-access memory (RRAM) devices based on oxides, including silicon-rich silica (SiO$$_x$$) and hafnium oxide – HfO$$_x$$ – a widely used transition metal oxide. The interest in RRAM technology has been increasing steadily in the last ten years, as it is widely viewed as the next generation of non-volatile memory devices. The simulation procedure describes self-consistently electronic charge and thermal transport effects in the three-dimensional (3D) space, allowing the study of the dynamics of conductive filaments responsible for switching. We focus on the study of the reliability of these devices, by specifically looking into how oxygen deficiency in the system affects the switching efficiency.

Toufik Sadi, Oves Badami, Vihar Georgiev, Asen Asenov
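The kinetic Monte Carlo approach underlying such simulations selects, at each step, one of the competing microscopic events with probability proportional to its rate and advances the clock by an exponentially distributed waiting time. A generic sketch of this event-selection loop (textbook KMC with made-up rates, not the authors' simulator):

```python
import numpy as np

rng = np.random.default_rng(7)

def kmc_step(rates, t):
    """One kinetic Monte Carlo step: pick event i with probability
    rates[i] / rates.sum() and advance the clock by an exponential
    waiting time with mean 1 / rates.sum()."""
    total = rates.sum()
    i = np.searchsorted(np.cumsum(rates), rng.uniform(0.0, total))
    dt = -np.log(rng.uniform()) / total
    return i, t + dt

# Toy example: three competing processes with hypothetical rates
rates = np.array([1.0, 0.1, 2.5])
events = []
t = 0.0
for _ in range(10000):
    i, t = kmc_step(rates, t)
    events.append(i)
print(np.bincount(events) / len(events))  # close to rates / rates.sum()
```

In a real RRAM simulator the rates themselves are recomputed after each event, since they depend on the local field and temperature.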

Multi-Subband Ensemble Monte Carlo Simulator for Nanodevices in the End of the Roadmap

As the scaling of electronic devices approaches the end of the roadmap, quantum phenomena play an important role not only in the electrostatics but also in the electron transport. This work presents the capabilities of a novel implementation of Multi-Subband Ensemble Monte Carlo simulators (MS-EMC) that includes quantum transport phenomena. In particular, an effective computational scheme for tunneling mechanisms (including S/D tunneling, GLM and BTBT) is shown, taking advantage of the main features of semi-classical transport models, namely reduced computational requirements and higher flexibility in comparison to purely full-quantum codes.

Carlos Sampedro, Cristina Medina-Bailon, Luca Donetti, Jose Luis Padilla, Carlos Navarro, Carlos Marquez, Francisco Gamiz

A Monte Carlo Evaluation of the Current and Low Frequency Current Noise at Spin-Dependent Hopping

Monte Carlo methods are convenient for modeling the electron transport due to single-electron hopping. The algorithm allows one to incorporate the restriction that, due to Coulomb repulsion, each trap can be occupied by only a single electron. With electron spin gaining increasing attention, trap-assisted electron transport has to be generalized to include the electron spin, especially in the presence of an external magnetic field and with transport between ferromagnetic contacts. An innovative Monte Carlo method for dealing with spin-dependent hopping is presented. When the electron spin is taken into account, the escape transition rates are given by transition matrices which describe the coupled spin and occupation relaxation from the trap. The transport process is represented by a cyclic repetition of consecutive electron hops from the source to a trap and from the trap to the drain. The rates depend neither on the previous hops nor on time. The method allows one to evaluate the electron current as well as the low-frequency current noise at spin-dependent hopping. Our Monte Carlo approach resolves a controversy between theoretical results found in the literature.

Viktor Sverdlov, Siegfried Selberherr

Efficient Stochastic Approaches for Multidimensional Integrals in Bayesian Statistics

A fundamental problem in Bayesian statistics is the accurate evaluation of multidimensional integrals. A comprehensive experimental study is presented of quasi-Monte Carlo algorithms based on the Sobol sequence combined with Matousek linear scrambling, along with a comparison with an adaptive Monte Carlo approach and a lattice rule based on generalized Fibonacci numbers. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing high-dimensional integrals. This is a crucial element, since accurate estimation of such integrals is needed for a more accurate and reliable interpretation of results in Bayesian statistics, which is foundational in applications such as machine learning.

Venelin Todorov, Ivan Dimov
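A scrambled Sobol sequence of the kind studied here is available in SciPy (whose `scipy.stats.qmc.Sobol` applies, to our understanding, linear matrix scrambling with a digital shift in the spirit of Matousek); a small sketch comparing it with plain Monte Carlo on a smooth test integrand whose exact integral is 1:

```python
import numpy as np
from scipy.stats import qmc

# Smooth test integrand on [0,1]^5 with exact integral 1
def f(x):
    return np.prod(2.0 * x, axis=1)

n, d = 4096, 5

# Scrambled Sobol points (quasi-Monte Carlo)
sampler = qmc.Sobol(d=d, scramble=True, seed=1)
x_qmc = sampler.random_base2(m=12)      # 2^12 = 4096 points
qmc_est = f(x_qmc).mean()

# Plain pseudo-random points (Monte Carlo)
rng = np.random.default_rng(1)
x_mc = rng.random((n, d))
mc_est = f(x_mc).mean()

print(abs(qmc_est - 1.0), abs(mc_est - 1.0))
```

For smooth integrands the QMC error typically decays close to $$O(N^{-1})$$, against $$O(N^{-1/2})$$ for plain Monte Carlo.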

Parallel Multilevel Monte Carlo Algorithms for Elliptic PDEs with Random Coefficients

In this work, we developed and investigated Monte Carlo algorithms for elliptic PDEs with random coefficients. We considered groundwater flow as a model problem, where a permeability field represents the random coefficients. The computational complexity is the main challenge in uncertainty quantification methods: each sample requires generating a random coefficient field and solving a partial differential equation. The permeability field was generated using the circulant embedding method. Multilevel Monte Carlo (MLMC) simulation can be based on different approximations of the partial differential equation. We developed three MLMC algorithms, based on finite volume, finite volume with renormalization, and renormalization approximations. We compared numerical simulations and the parallel performance of the MLMC algorithms for 2D and 3D problems.

Petr Zakharov, Oleg Iliev, Jan Mohring, Nikolay Shegunov
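In an MLMC estimator, cheap coarse-level samples carry most of the variance, while level-to-level corrections, computed with coupled noise, have small variance and need few samples. A toy sketch of this telescoping structure (estimating the mean of geometric Brownian motion with an Euler scheme; a generic illustration, not the groundwater-flow solver of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def euler_gbm(s0, mu, sigma, T, n_steps, dW):
    """Euler-Maruyama paths of geometric Brownian motion driven by
    given Brownian increments dW (shape: samples x n_steps)."""
    s = np.full(dW.shape[0], s0)
    dt = T / n_steps
    for k in range(n_steps):
        s = s * (1.0 + mu * dt + sigma * dW[:, k])
    return s

def mlmc_estimate(s0, mu, sigma, T, levels, samples_per_level):
    """MLMC estimate of E[S_T]: a telescoping sum of corrections between
    consecutive grid levels, with fine and coarse paths driven by the
    same underlying noise."""
    est = 0.0
    for l, n_samp in zip(range(levels + 1), samples_per_level):
        n_fine = 2 ** l
        dt = T / n_fine
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_samp, n_fine))
        fine = euler_gbm(s0, mu, sigma, T, n_fine, dW)
        if l == 0:
            est += fine.mean()
        else:
            # coarse path reuses the same noise, summed pairwise
            dW_c = dW[:, 0::2] + dW[:, 1::2]
            coarse = euler_gbm(s0, mu, sigma, T, n_fine // 2, dW_c)
            est += (fine - coarse).mean()
    return est

est = mlmc_estimate(1.0, 0.05, 0.2, 1.0, levels=5,
                    samples_per_level=[20000, 10000, 5000, 2500, 1200, 600])
print(est)  # the exact value is exp(0.05)
```

The same sample-allocation idea carries over when each "path" is replaced by a PDE solve on a grid of level-dependent resolution.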

Application of Metaheuristics to Large-Scale Problems

Frontmatter

Generalized Nets Model of Data Parallel Processing in Large Scale Wireless Sensor Networks

The Generalized Nets (GN) approach is an advanced way of modeling parallel processes and analyzing complex systems such as large-scale wireless sensor networks (LWSN). LWSNs such as meteorological and air quality monitoring systems can generate large amounts of data, reaching petabytes per year. Data-parallel processing on the sensor nodes is one possible solution for reducing inter-node communication in order to save energy. At the same time, on-site parallel processing requires additional energy for the computational data processing. Therefore, the development of a realistic model of the process is critical for the optimization analysis of every large-scale sensor network. In this paper, a newly developed GN-based model of sensor-node data-parallel processing in an LWSN with cluster topology is presented. The proposed model covers all aspects of inter-node sensor data integration and the cluster-based parallel processes specific to operations on large amounts of sensor data.

Alexander Alexandrov, Vladimir Monov, Tasho Tashev

Modeling Block Structured Project Scheduling with Resource Constraints

We propose a formal model of block-structured project scheduling with resource constraints, with the goal of designing optimization algorithms. We combine block-structured modeling of business processes with results from the project scheduling literature. Differently from standard approaches, here we focus on block-structured scheduling processes. Our main achievement is the formulation of an abstract mathematical model of block-structured resource-constrained scheduling processes. We tested the correctness and feasibility of our approach using an initial experimental prototype based on Constraint Logic Programming.

Amelia Bădică, Costin Bădică, Doina Logofătu, Ion Buligiu, Liviu Ciora

Solving Combinatorial Puzzles with Parallel Evolutionary Algorithms

Rubik’s cube is the most popular combinatorial puzzle, and solutions of such combinatorial problems are generally hard to find. Taking the 90° clockwise rotations of the cube’s sides as operations yields a minimal grammar for the cube: by building sentences over the six operations ([L]eft, [R]ight, [T]op, [D]own, [F]ront, [B]ack), all permutations of the cube can be reached. In an evolutionary algorithm (such as a genetic algorithm), a set of such formal grammar sentences can serve as the population of individuals. Since the individuals are strings, single cut-point crossover can be applied efficiently, and replacing a randomly selected operation with another randomly selected operation serves as an efficient mutation operator. The most important part of such a global optimization is the fitness function; for better evaluation of individuals, this research proposes a combination of the Euclidean and Hausdorff distances. The experiments are implemented as a parallel program written in C++ with Open MPI.

Todor Balabanov, Stoyan Ivanov, Rumen Ketipov
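The string representation makes the two variation operators almost trivial. Below is a minimal sketch of them (not the authors' C++/MPI code); the fitness function, which in the paper combines Euclidean and Hausdorff distances over cube states, is omitted because it requires a cube simulator.

```python
import random

OPS = "LRTDFB"  # the six 90-degree clockwise face rotations

def crossover(a, b, rng):
    # Single cut-point crossover on two equal-length move strings:
    # take the prefix of one parent and the suffix of the other.
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, rng):
    # Replace one randomly selected operation with another
    # randomly selected operation.
    i = rng.randrange(len(ind))
    return ind[:i] + rng.choice(OPS) + ind[i + 1:]

rng = random.Random(1)
child = crossover("LRTD", "FBFB", rng)
mutant = mutate(child, rng)
```

Because every string over `OPS` is a valid move sequence, neither operator can produce an infeasible individual, which is one reason this encoding is convenient.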

Multi-objective ACO Algorithm for WSN Layout: InterCriteria Analysis

One of the key objectives in wireless sensor network deployment is full coverage of the monitored region with a minimal number of sensors and minimized energy consumption of the network. In this paper we apply multi-objective Ant Colony Optimization (ACO) to this computationally hard telecommunication problem. The number of ants is one of the key algorithm parameters in ACO, and it is important to find the number of ants needed to achieve good solutions with minimal computational resources. InterCriteria Analysis is applied to study the influence of the number of ants on the algorithm performance.

Stefka Fidanova, Olympia Roeva

Reachable Sets of Nonlinear Control Systems: Estimation Approaches

Dynamical control systems of a special structure, with a combined nonlinearity of quadratic and bilinear kind present in the state velocities, are studied. Uncertainty in the initial states and in the system parameters is also assumed; it is of set-membership type, meaning that only bounding sets for the unknown items are given. Ellipsoidal estimates of the reachable sets are derived using the special structure of the studied control system. Techniques of generalized solutions of Hamilton-Jacobi-Bellman (HJB) equations and HJB inequalities, together with previously established results of ellipsoidal calculus, are applied to find set-valued estimates of the reachable sets as level sets of a related cost functional. Computational algorithms and related numerical examples are also given.

Tatiana F. Filippova

Jumping Average Filter Parameter Optimization for Pulsar Signal Detection

The paper studies the parameters of a jumping average window algorithm for improving the signal-to-noise ratio while retaining the signal characteristics. The study was conducted with an FPGA device for processing and detecting the real pulsar signal B0329+54.

Ivan Garvanov, Vladimir Ivanov
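The core of a jumping (non-overlapping) average window is simple to state: each block of `window` consecutive samples is replaced by its mean, which reduces the noise variance by roughly a factor of the window length at the cost of time resolution. The sketch below is an illustrative NumPy version, not the paper's FPGA implementation.

```python
import numpy as np

def jumping_average(signal, window):
    # Non-overlapping ("jumping") window mean: unlike a sliding
    # average, the window advances by its full length each step,
    # so an input of N samples yields N // window outputs.
    x = np.asarray(signal, dtype=float)
    n = len(x) // window            # trailing partial block is dropped
    return x[:n * window].reshape(n, window).mean(axis=1)

# Usage: averaging pairs of samples halves the data rate.
smoothed = jumping_average([1, 2, 3, 4, 5, 6, 7], 2)  # -> [1.5, 3.5, 5.5]
```

The choice of window length is exactly the trade-off the paper studies: longer windows suppress more noise but smear narrow pulse features.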

Precision in High Dimensional Optimisation of Global Tasks with Unknown Solutions

High dimensional optimisation is a challenge for most of the available search methods. Solving global and constrained tasks appears to be even harder, and exploration of tasks with unknown solutions is rare in the literature and requires more research effort. This article analyses the optimisation of high dimensional global tasks, including constrained ones, with unknown solutions. It reviews and analyses the precision of experimental results, the possibility of trapping in local sub-optima, and adaptation to unknown search spaces.

Kalin Penev

An Intuitionistic Fuzzy Approach to the Travelling Salesman Problem

The travelling salesman problem (TSP) is a classical problem in combinatorial optimization. Its objective is to find the cheapest route for a salesman who starts from a given city, visits every other city exactly once, and finally returns to the city where he started. There are different approaches for solving the TSP with crisp data, but in real life it may not be possible to obtain the delivery costs as exact quantities. To deal with such imprecision, Zadeh introduced the concept of fuzzy sets. Algorithms exist for solving the problem based on fuzzy or triangular intuitionistic fuzzy numbers (a special case of intuitionistic fuzzy sets (IFSs)). Often, however, the degrees of membership and non-membership of an element cannot be given as exact numbers; Atanassov and Gargov first addressed this in 1989 with the concept of interval-valued intuitionistic fuzzy sets (IVIFSs), characterized by sub-intervals of the unit interval. In this paper, a new type of TSP is formulated, in which the travelling cost from one city to another is an interval-valued intuitionistic fuzzy number (IVIFN), depending on the availability of the conveyance, the condition of the roads, etc. We propose, for the first time, a Hungarian algorithm for finding an optimal solution of this TSP, using the apparatus of index matrices (IMs), introduced by Atanassov in 1984, and of IVIFSs. The example shown in this paper demonstrates the effectiveness of the algorithm. The presented approach for solving this new type of TSP can be applied to problems with imprecise parameters and can be extended to obtain optimal solutions for other types of multidimensional TSPs.

Velichka Traneva, Stoyan Tranev

Alternatives for Neighborhood Function in Kohonen Maps

Artificial neural networks are among the most researched topics in artificial intelligence. The multilayer perceptron is the most widely used type of artificial neural network, but other types such as Kohonen maps, generalized nets [1], or combinations with the Kalman filter [2, 3] are also of great interest. Proposed by Teuvo Kohonen in the 1980s, self-organizing maps have applications in meteorology, oceanography, project prioritization and selection, seismic facies analysis for oil and gas exploration, failure mode and effects analysis, creation of artwork, and many other areas. Self-organizing maps are very useful for visualization through dimensionality reduction. They use unsupervised competitive learning, the basic idea being that the network classifies the input data into a predefined number of clusters; when the network has few nodes, it achieves results similar to K-means clustering. One of the components of a self-organizing map is the neighborhood function, which gives the scaling factor for the distance between one neuron and the other neurons at each step. The simplest form of neighborhood function yields 1 for the closest nodes and 0 for all others, but the most commonly used neighborhood function is a Gaussian. In this research, fading cosine and exponentially regulated cosine functions are proposed as alternative neighborhood functions.

Iliyan Zankinski, Kolyu Kolev, Todor Balabanov
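For orientation, here is the standard Gaussian neighborhood alongside one plausible "fading cosine" shape. The Gaussian form is the textbook SOM neighborhood; the fading-cosine formula below is an illustrative assumption (a cosine lobe with a linear fade, zero outside a cut-off radius), not necessarily the exact function proposed in the paper.

```python
import numpy as np

def gaussian_neighborhood(dist, sigma):
    # Standard SOM neighborhood: 1 at the winning neuron and
    # smoothly decaying with grid distance; sigma is typically
    # shrunk over the course of training.
    d = np.asarray(dist, dtype=float)
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

def fading_cosine_neighborhood(dist, radius):
    # Hypothetical "fading cosine" shape: a cosine lobe scaled by
    # a linear fade, exactly zero beyond `radius`. Unlike the
    # Gaussian it has compact support, so distant neurons receive
    # no update at all.
    d = np.asarray(dist, dtype=float)
    fade = 1.0 - d / radius
    lobe = np.cos(0.5 * np.pi * d / radius)
    return np.where(d <= radius, fade * lobe, 0.0)
```

Both functions equal 1 at distance 0 and decrease monotonically, which is the property the weight-update rule actually relies on; the difference lies in the tail behavior.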

Large Scale Machine Learning: Multiscale Algorithms and Performance Guarantees

Frontmatter

A Modified Gomory-Hu Algorithm with DWDM-Oriented Technology

Optimization of the topology of computer networks based on the classical Gomory-Hu algorithm does not take the specific transfer technology into account. For WDM technology requirements this leads to a redundancy of channel capacities. To reduce the redundancy of allocating network resources, we propose a modification of the Gomory-Hu algorithm which takes account of the specifics of DWDM technology not at the final stage but already at intermediate stages of the process. The original algorithm proposed by Gomory and Hu involves the decomposition of the graph of the input network into ring subnets of different dimensions. Our modified algorithm takes account of the technical parameters of the DWDM technology for each ring during the decomposition. We illustrate our method by an example. The technique can be extended to large networks, which may lead to a significant economic effect.

Winfried Auzinger, Kvitoslava Obelovska, Roksolyana Stolyarchuk

Additional Contributed Papers

Frontmatter

Adaptive Exponential Integrators for MCTDHF

We compare exponential-type integrators for the numerical time-propagation of the equations of motion arising in the multi-configuration time-dependent Hartree-Fock method for the approximation of the high-dimensional multi-particle Schrödinger equation. We find that among the most widely used integrators like Runge-Kutta, exponential splitting, exponential Runge-Kutta, exponential multistep and Lawson methods, exponential Lawson multistep methods with one predictor/corrector step provide optimal stability and accuracy at the least computational cost, taking into account that the evaluation of the nonlocal potential terms is by far the computationally most expensive part of such a calculation. Moreover, the predictor step provides an estimator for the time-stepping error at no additional cost, which enables adaptive time-stepping to reliably control the accuracy of a computation.

Winfried Auzinger, Alexander Grosz, Harald Hofstätter, Othmar Koch

Convergence Analysis of a Finite Volume Gradient Scheme for a Linear Parabolic Equation Using Characteristic Methods

The first aim of this work is to establish a finite volume scheme, using the method of characteristics, for non-stationary advection-diffusion equations. The second aim is to analyze the convergence order of this scheme. The finite volume method considered here was developed recently in [3] to approximate heterogeneous and anisotropic diffusion problems on a general class of nonconforming meshes. Schemes based on the finite volume method of [3] can be obtained by replacing the gradient of the exact solution with a stable and consistent discrete gradient. This work continues our previous papers [1, 2], in which we directly derived a finite volume scheme for the heat equation along with a convergence analysis.

Fayssal Benkhaldoun, Abdallah Bradji

Implementing Mesh-Projection Schemes Using the Technology of Adaptive Mesh Refinement

Adaptive mesh refinement (AMR) is a basic method for developing efficient technologies for the numerical solution of various problems of continuum mechanics. Numerical modeling of heat and mass transfer, hydrodynamics, structural/fracture mechanics, etc. deals with nonlinear, strongly coupled processes which differ significantly in spatial and temporal scales. Modeling of such multiscale correlated phenomena stimulates the application of computational technologies using irregular grids, among which octree meshes are the best known and most widely used. Using the data structures created for tree-structured meshes, we developed a dynamic load-balancing algorithm aimed at cluster-type parallel computing systems. The developed tools support the functionality necessary to implement various numerical models of continuum mechanics. As an example of possible applications, we discuss a family of mesh-projective approximations to second-order partial differential equations with variable tensor-type coefficients. The difference operator of the scheme provides energy conservation and possesses the “self-adjoint” property inherent to the original differential operator, e.g. in the case of a heat transfer model. We present numerical results obtained by solving model initial boundary value problems for parabolic equations using the developed AMR technique.

Dmitry Boykov, Sergey Grigoriev, Olga Olkhovskaya, Alexey Boldarev

Valuation of European Options with Liquidity Shocks Switching by Fitted Finite Volume Method

In the present paper, we construct a superconvergent fitted finite volume method (FFVM) for pricing European option with switching liquidity shocks. We investigate some basic properties of the numerical solution and establish superconvergence in maximal discrete norm. An efficient algorithm, governing the degeneracy and exponential non-linearity in the problem, is proposed. Results from various numerical experiments with different European options are provided.

Miglena N. Koleva, Lubin G. Vulkov

Space-Time Finite Element Methods for Parabolic Initial-Boundary Value Problems with Non-smooth Solutions

We propose consistent, locally stabilized, conforming finite element schemes on completely unstructured simplicial space-time meshes for the numerical solution of parabolic initial-boundary value problems under the assumption of maximal parabolic regularity. We present new a priori discretization error estimates for low-regularity solutions, and some numerical results including results for an adaptive version of the scheme and strong scaling results.

Ulrich Langer, Andreas Schafelner

MATLAB Implementation of Element-Based Solvers

Rahman and Valdman (2013) introduced a vectorized way to assemble finite element stiffness and mass matrices in MATLAB: the local element matrices are computed all at once by array operations and stored in multi-dimensional arrays. We build iterative solvers on these multi-dimensional structures, completely avoiding the use of a sparse matrix.

Leszek Marcinkowski, Jan Valdman
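The same idea translates directly to NumPy. The sketch below (an illustration of the storage scheme, not the authors' MATLAB code) computes all local mass matrices of 1D linear elements at once into a `(n_elems, 2, 2)` array, then applies the global mass matrix matrix-free by gather, batched local multiply, and scatter-add, so no sparse matrix is ever formed.

```python
import numpy as np

def local_mass_matrices(node_x, elems):
    # All 1D P1 element mass matrices in one shot, stored as a
    # (n_elems, 2, 2) multi-dimensional array: each is
    # h_e/6 * [[2, 1], [1, 2]] for element length h_e.
    h = node_x[elems[:, 1]] - node_x[elems[:, 0]]   # element lengths
    ref = np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0  # reference matrix
    return h[:, None, None] * ref                   # broadcast over elements

def apply_mass(Mloc, elems, u, n_nodes):
    # Matrix-free action r = M u: gather nodal values per element,
    # multiply by the stored local matrices in a batch, scatter-add.
    local_u = u[elems]                               # (n_elems, 2)
    local_r = np.einsum('eij,ej->ei', Mloc, local_u)
    r = np.zeros(n_nodes)
    np.add.at(r, elems, local_r)                     # unbuffered scatter-add
    return r

# Usage: uniform mesh of [0, 1] with two elements.
node_x = np.linspace(0.0, 1.0, 3)
elems = np.array([[0, 1], [1, 2]])
Mloc = local_mass_matrices(node_x, elems)
r = apply_mass(Mloc, elems, np.ones(3), 3)  # lumped row sums of M
```

Applying `M` to the all-ones vector returns the integrals of the basis functions, a quick correctness check, and iterative solvers such as CG only ever need this `apply` operation.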

The Approach to the Construction of Difference Schemes with a Consistent Approximation of the Stress-Strain State and the Energy Balance of the Medium in Cylindrical Geometry

In the paper, we work on unstructured metric grids of the support operator method, on whose topological and geometric structure only minimal reasonable restrictions are imposed. For the symmetrized displacement tensor $$t_\mathbf{u}$$, discrete analogs of the self-adjoint and sign-definite operations $$\mathrm{div}(t_\mathbf{u})$$ and $$\mathrm{div}(\mathrm{tr}(t_\mathbf{u})\,\delta )$$, invariant under rigid rotations, are obtained for modeling the force fields of elastic processes. We also obtain approximations of the integrals $$\int _{\varOmega }\mathrm{tr}(t_\mathbf{u}^{2})\,dV$$ and $$\int _{\varOmega }\mathrm{tr}^{2}(t_\mathbf{u})\,dV$$, sufficient to simulate the elasticity-related energy balances of the medium taking into account the curvature of space in the cylindrical geometry of the system.

Yury Poveshchenko, Vladimir Gasilov, Viktoriia Podryga, Yulia Sharova

Two-Layer Completely Conservative Difference Scheme of Gas Dynamics in Eulerian Variables with Adaptive Regularization of Solution

For the equations of gas dynamics in Euler variables, a family of two-layer (in time) completely conservative difference schemes with space-profiled time weights is constructed. Effective conservation of internal energy in this class of divergent difference schemes is ensured by the absence of constantly acting sources of difference origin in the internal energy equation that produce “computational” entropy (including at singularities of the solution). Considerable attention is paid to methods of constructing regularizing mass, momentum, and internal energy fluxes that do not violate the complete conservatism of the difference schemes of this class, and to the analysis of their amplitude and the admissibility of their adaptive use on grids of variable structure in space and on implicit layers in time. The developed type of difference schemes can be used to calculate multi-temperature processes (electron and ion temperatures), where a single balance equation for the total energy of the medium is not enough for the available number of variables.

Orkhan Rahimly, Viktoriia Podryga, Yury Poveshchenko, Parvin Rahimly, Yulia Sharova

Some Problems of Modeling the Impact on Gas Hydrates on the Basis of Splitting by Physical Processes

In this work we consider an equilibrium joint model of two-component three-phase (water, gas, hydrate) filtration fluid dynamics and of two-phase processes in the thawed zone containing no gas hydrates, for which splitting by physical processes is performed. Algorithms for the joint calculation of the hydrate-containing and thawed states of the fluid-dynamic medium are described. Examples are given of calculations of technogenic depressive effects near wells on the dynamics of the spatial distributions of gas hydrate thawing and the formation of thawed two-phase zones.

Parvin Rahimly, Viktoriia Podryga, Yury Poveshchenko, Orkhan Rahimly, Yulia Sharova

Backmatter
