2019 | Book

# Progress in Industrial Mathematics at ECMI 2018

Editors: Prof. István Faragó, Prof. Ferenc Izsák, Prof. Péter L. Simon

Publisher: Springer International Publishing

Book Series: Mathematics in Industry


This book explores mathematics in a wide variety of applications, ranging from problems in electronics, energy and the environment, to mechanics and mechatronics. The book gathers 81 contributions submitted to the 20th European Conference on Mathematics for Industry, ECMI 2018, which was held in Budapest, Hungary in June 2018. The application areas include: Applied Physics, Biology and Medicine, Cybersecurity, Data Science, Economics, Finance and Insurance, Energy, Production Systems, Social Challenges, and Vehicles and Transportation. In turn, the mathematical technologies discussed include: Combinatorial Optimization, Cooperative Games, Delay Differential Equations, Finite Elements, Hamilton-Jacobi Equations, Impulsive Control, Information Theory and Statistics, Inverse Problems, Machine Learning, Point Processes, Reaction-Diffusion Equations, Risk Processes, Scheduling Theory, Semidefinite Programming, Stochastic Approximation, Spatial Processes, System Identification, and Wavelets.

The goal of the European Consortium for Mathematics in Industry (ECMI) conference series is to promote interaction between academia and industry, leading to innovations in both fields. These events have attracted leading experts from business, science and academia, and have promoted the application of novel mathematical technologies to industry. They have also encouraged industrial sectors to share challenging problems where mathematicians can provide fresh insights and perspectives. Lastly, the ECMI conferences are one of the main forums in which significant advances in industrial mathematics are presented, bringing together prominent figures from business, science and academia to promote the use of innovative mathematics in industry.


In the growing field of clean wind energy extraction, the topic of rotor imbalances in wind turbines is of vital importance for the operation, safety and lifetime consumption of the turbines. The vibrations induced by imbalances lead to damage to important components, high repair expenses, and reduced output. The state-of-the-art procedure to identify rotor imbalance is an expensive on-site procedure. We replace that procedure by a method that uses only the vibrations of the turbine during operation to determine the imbalance. To this end, a mathematical model of the turbine, in the shape of an operator or matrix A, was constructed that maps the imbalance p to the resulting vibrations u. The problem of reconstructing an unknown imbalance from measured vibration data is thus an ill-posed inverse problem that requires regularization techniques for its stable solution. We developed such a method, first for the case in which the vibration data are collected during operation at constant rotational speed. Later, operation at variable speed was investigated and more sophisticated algorithms were developed for that case.
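The regularized reconstruction step can be illustrated with a minimal Tikhonov sketch; the forward matrix A, the noise level, and the regularization parameter alpha below are illustrative stand-ins, not the turbine model itself:

```python
import numpy as np

def tikhonov(A, u, alpha):
    """Solve min ||A p - u||^2 + alpha * ||p||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ u)

# Illustrative ill-conditioned forward operator and noisy "vibration" data.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 50), 8, increasing=True)
p_true = rng.standard_normal(8)
u = A @ p_true + 1e-3 * rng.standard_normal(50)

p_reg = tikhonov(A, u, alpha=1e-6)
```

The penalty term stabilizes the inversion at the cost of a small bias; in practice alpha is chosen by a parameter-choice rule matched to the noise level.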

We consider a model of an electric circuit, where differential algebraic equations for a circuit part are coupled to partial differential equations for an electromagnetic field part. An uncertainty quantification is performed by changing physical parameters into random variables. A random quantity of interest is expanded into the (generalised) polynomial chaos using orthogonal basis polynomials. We investigate the determination of sparse representations, where just a few basis polynomials are required for a sufficiently accurate approximation. Furthermore, we apply model order reduction with proper orthogonal decomposition to obtain a low-dimensional representation in an alternative basis.

On the basis of a mixture model ansatz we propose a three-dimensional two-phase flow fiber model for dry spinning processes, which are characterized by solvent evaporation and fiber-air interaction. Employing dimensional reduction this model is embedded into an efficient numerical framework, such that simulations of industrial spinning setups become feasible.

Three-dimensional printing is a process for building new parts with a specified shape. Despite its increasing popularity, printers capable of working with more than one material are as yet unavailable. In this work we model the design and operation of an apparatus for printing with two materials, namely printing a component that encloses a previously constructed inner structure. The structure supporting the second material introduces difficulties, resulting from possible “shaded” areas on the printing surface. The problem is addressed assuming the installation of galvanometer mirror scanners as additional light sources on the walls of the printer, and it is modeled in two steps: finding the smallest number of emitters, and their positions, such that the whole part can be constructed; and assigning an emitter to each cell of the part to be reached. The first step is formulated as a set covering problem. The second is formulated as a linear integer program aiming at optimizing two objectives: the number of emitters activated per layer and the quality of the printed part. Methods for solving the problems are described and tested.
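The first step above is a set covering problem; a sketch of the classical greedy heuristic for it is shown below (the emitter visibility sets are hypothetical, not the paper's instance, and exact solvers would be used for the integer programs in practice):

```python
def greedy_set_cover(universe, subsets):
    """Greedy heuristic: repeatedly pick the subset covering the most
    still-uncovered elements until the whole universe is covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Hypothetical printing-surface cells and candidate emitter positions,
# each emitter "seeing" a subset of the cells (the rest being shaded).
cells = range(6)
emitters = {"E1": {0, 1, 2}, "E2": {2, 3}, "E3": {3, 4, 5}, "E4": {0, 5}}
plan = greedy_set_cover(cells, emitters)
```

The greedy rule gives a logarithmic approximation guarantee; here two emitters suffice to reach every cell.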

The Guyer–Krumhansl equation is an extension to the classical Fourier law that is particularly appealing from a theoretical point of view because it provides a link between kinetic and continuum models and is based on well-defined physical parameters. Here we show how, subjected to a specific boundary condition analogous to the slip conditions for fluids, the Guyer–Krumhansl equation yields promising results in predicting the effective thermal conductivity of nanowires with circular and rectangular cross-sections.

The second-life concept adds monetary value to disused automotive batteries. In turn, this could lead to a higher market share of electric transportation by reducing the total costs for the consumer. However, the ageing of batteries limits their total lifetime and the non-linear ageing behaviour at later stages can diminish the benefit of second-life application. With a model-based ageing study, we show that the lifetime can be doubled by introducing a two-stage anode porosity.

In this paper, simulation approaches for the partial melting of metallic workpieces at the micro scale are presented. The underlying model considers heat transport in the whole workpiece, the solid-liquid phase transition assuming a sharp interface, and the fluid flow in the liquid part including surface tension effects. Depending on whether the solid-liquid interface is handled by an interface-tracking or an interface-capturing approach, two different numerical schemes based on an ALE finite element method are presented. A crucial aspect for both methods is the geometrical evolution of the solid-liquid-gas triple junction due to the non-material movement of the solid-liquid interface. Since the two methods have complementary advantages and disadvantages, they can be used in alternation in a combined approach. Numerical results are shown for melting the tip of a thin steel wire by a laser beam.

The ability of gas-fired power plants to ramp quickly is used to balance fluctuations in the power grid caused by renewable energy sources, which in turn leads to time-varying gas consumption and fluctuations in the gas network. Since gas system operators assume nearly constant gas consumption, there is a need to assess the risk of these stochastic fluctuations, which occur on shorter time scales than the planning horizon. We present a mathematical formulation for these stochastic fluctuations as a generalization of isothermal Euler equations. Furthermore, we discuss control policies to damp fluctuations in the network.

We describe a model for the process of synthesizing nanoparticles of a specific size from a liquid solution. Initially, we consider a single-particle model that accounts for monomer diffusion in solution around the particle and kinetic reactions at the particle surface. For the far-field bulk concentration, a mass conservation expression is used. Based on a small dimensionless parameter, we propose a pseudo-steady-state approximation to the model. The model is then extended to a system of N particles. Numerical solutions for the time-dependent average particle radius are shown to be in excellent agreement with experimental data.

We discuss a mean-field simulation of an evacuation scenario. We model the crowd to be evacuated by a probability measure μ. The controls are represented by external assistants described by ordinary differential equations. The task of evacuation is written as an optimal control problem. Under the assumption that μ has an L^2-density, we state the corresponding first-order optimality condition using a Lagrangian approach in the L^2-topology. Based on this we solve the problem with an instantaneous control algorithm. Simulation results of an evacuation scenario underline the feasibility of the approach and show the behaviour expected to fit the requirements posed by the cost functional.

Compressing a porous material or injecting fluid into a porous material can induce changes in the pore space, leading to a change in porosity and permeability. In a continuum-scale PDE model, such as Biot’s theory of linear poroelasticity, the Kozeny–Carman equation is commonly used to determine the permeability of the porous medium from the porosity. The Kozeny–Carman relation assumes that there will be flow through the porous medium at a given location as long as the porosity there is larger than zero. In contrast, from discrete network models it is known that percolation thresholds larger than zero exist, indicating that the fluid will stop flowing if the average porosity becomes smaller than a certain value dictated by these thresholds. In this study, the difference between the Kozeny–Carman equation and the equation based on percolation theory is investigated.
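The qualitative contrast between the two laws can be sketched numerically; the prefactor k0, threshold phi_c and exponent mu below are illustrative choices, not the study's fitted values:

```python
def kozeny_carman(phi, k0=1.0):
    """Kozeny-Carman: permeability stays positive for any porosity phi > 0."""
    return k0 * phi ** 3 / (1.0 - phi) ** 2

def percolation_permeability(phi, phi_c=0.1, mu=2.0, k0=1.0):
    """Percolation-type law: flow stops once phi falls below the threshold phi_c."""
    return k0 * max(phi - phi_c, 0.0) ** mu

# Below the percolation threshold the two laws disagree qualitatively:
k_kc = kozeny_carman(0.05)             # small but strictly positive
k_pc = percolation_permeability(0.05)  # exactly zero
```

This is precisely the discrepancy investigated in the study: one model predicts ever-slower flow, the other a complete shutdown below the threshold.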

The motion of small particles attached to fluid interfaces is important for the production of 2D-ordered micro- and nano-layers, which are used in solar panels, CCDs, and bio-memory chips. The problem was solved semi-analytically for a water/air interface and three-phase contact angles α ≤ 90°, using the Mehler–Fox transformation (Zabarankin, Proc R Soc A 463:2329–2349, 2007). We propose a numerical method, based on the gauge formulation of the Stokes equations for two viscous fluids, for calculating the velocity field, pressure, and drag force coefficient. The method is applicable for all values of α and all fluid viscosities. The weak singularity of the solutions at the three-phase contact line is studied and the corresponding phase diagram is calculated. Isolating the type of singularity helps us construct an efficient second-order numerical scheme based on the ADI approach. The problem is solved numerically for different particle positions at the interface and different ratios of the fluid viscosities.

Refinement of industrial (e.g. car-body) surfaces is performed by evaluating the shape and distribution of reflection lines or highlight lines. In this paper, we propose a method to semi-automatically evaluate and improve the quality of highlight line structures. The correspondence between the shape of the highlight lines and the surface parameters is highly complicated and strongly nonlinear. We therefore propose a genetic algorithm for computing the surface parameters (control points) that correspond to the corrected highlight line structure.

We derive a quantum model that provides corrections to the classical motion of nearly localized particles. Our method is based on the assumption that the particle wave function is strongly localized and represented by a Gaussian shape. As an application of our method, we describe the motion of a particle in a 2D non-harmonic potential.

The main goal of this study is the validation of a simplified 3D mathematical model for passive admixture spreading in shallow flows. The tested model is oriented towards hydrological and ecological problems, and it can be applied to natural streams such as rivers and channels. The earlier proposed model of the ‘elongated, shallow and weakly curved stream’ (Nadolin, Mat Model 21(2):4–28, 2009) takes the structure of the stream-bed into account when evaluating the flow velocity at every point of the domain. This is an advantage of the model, allowing the admixture spreading in a channel with varying width and depth to be calculated more accurately than with depth-averaged models. For example, we can observe the opposite flow in the near-surface zone, which may be caused e.g. by the wind. The results of numerical experiments show that this reduced 3D model describes the admixture spreading processes in natural streams with acceptable accuracy.

This paper concerns the dynamics of gas spot prices on one of the European energy exchanges, the EEX. A detailed description of the price dynamics is provided along with several multi-factor models for daily gas prices. An original approach to the development of such multi-factor daily price models is proposed. Specifically, daily price models taking into account a non-integer power of the time variable tend to perform relatively well on a horizon of several weeks despite the heteroskedasticity of the daily prices.

In this work we explore the use of tempered fractional derivatives in the modelling of transient currents in disordered materials. We particularly focus on the numerical approximation of the resulting problems. As is known, the solutions of fractional differential equations usually exhibit singularities at the time origin, and therefore a reduction of the convergence order of standard numerical schemes can be expected. To overcome this, we propose a finite difference scheme on a time-graded mesh, in which the grading exponent can be chosen appropriately, taking the singularity type into account. Numerical results are presented and discussed.
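The graded-mesh construction commonly used for such time singularities can be sketched as follows (the horizon, step count and grading exponent are illustrative; in the paper the exponent is matched to the singularity type):

```python
import numpy as np

def graded_mesh(T, N, gamma):
    """Graded time mesh t_j = T * (j/N)**gamma; gamma > 1 clusters
    the grid points near t = 0, where the solution is singular."""
    j = np.arange(N + 1)
    return T * (j / N) ** gamma

mesh = graded_mesh(T=1.0, N=10, gamma=2.0)
# First step (0.01) is far smaller than the last (0.19), resolving
# the singular layer without refining the whole interval.
```

With gamma = 1 the mesh is uniform; increasing gamma trades resolution at later times for resolution near the origin, which is what restores the full convergence order for singular solutions.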

In the business of liquefied petroleum gas (LPG), the LPG cylinder is the main asset and correct planning of cylinder needs is critical. This work addresses a challenge, proposed at a European Study Group with Industry by a Portuguese energy sector company, whose objective was to define an asset acquisition plan, i.e., to determine how many LPG cylinders to acquire, and when to acquire them, in order to optimize the investment. The approach used to solve this problem can be divided into three phases. First, it is necessary to forecast demand, sales and the return of LPG bottles. Subsequently, these data can be used in a model for inventory management. Classical inventory models, such as the Wilson model, determine the Economic Order Quantity (EOQ) as the batch size that minimizes the total cost of stock management. A drawback of this approach is that it does not take into account reverse logistics (i.e. the return of cylinders), which plays a crucial role in this challenge. Finally, because the return rate of LPG bottles must be considered, reverse logistics models and closed-loop supply chain models are explored.
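The classical Wilson EOQ formula mentioned above, Q* = sqrt(2DK/h), can be sketched in a few lines (the demand, ordering and holding figures are hypothetical, and reverse logistics is deliberately ignored here, which is exactly the drawback noted above):

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost):
    """Wilson EOQ: batch size Q* = sqrt(2*D*K/h) that minimizes the sum
    of annual ordering cost (D/Q * K) and holding cost (Q/2 * h)."""
    return sqrt(2.0 * annual_demand * order_cost / holding_cost)

# Hypothetical figures: 10,000 cylinders/year of demand, 500 per order
# placed, and a holding cost of 2.0 per cylinder per year.
q_star = eoq(10_000, 500, 2.0)  # about 2236 cylinders per order
```

Extending this baseline to account for the cylinder return rate is what the reverse-logistics and closed-loop supply chain models in the last phase address.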

Transport networks are crucial to the functioning of natural systems and technological infrastructures. For flow networks in many scenarios, such as rivers or blood vessels, acyclic networks (i.e., trees) are optimal structures when assuming time-independent in- and outflow. Dropping this assumption, fluctuations of net flow at source and/or sink nodes may render the pure tree solutions unstable even under a simple local adaptation rule for the conductances. Here, we consider tree-like networks under a spatially heterogeneous distribution of fluctuations, where the root of the tree is supplied by a constant source and the leaves at the bottom are equipped with sinks with fluctuating loads. We find that the network divides into two regions, characterized by tree-like motifs and stable cycles respectively. The cycles emerge through transcritical bifurcations at a critical fluctuation amplitude. For a simple network structure, depending on the parameters defining the local adaptation, cycles first appear close to the leaves (or the root) and then spread towards the root (or the leaves). The interaction between topology and dynamics gives rise to complex feedback mechanisms, with many open questions in the theory of network dynamics. A general understanding of the dynamics in adaptive transport networks is essential in the study of mammalian vasculature, and adaptive transport networks may find technological applications in self-organizing piping systems.

The aim of this article is to solve an inverse problem to determine the presence and some properties of an elastic “inclusion” (an unknown object, characterized by elastic properties distinct from those of the surrounding medium) from partial observations of acoustic waves scattered by the inclusion. The method builds on time-reversal techniques. A finite element method based on the acousto-elastodynamics equations is derived and used to solve the inverse problem. Our approach is applied to configurations modeling breast cancer detection, using simulated ultrasound waves.

In this article, we consider a microscopic model for host-vector disease transmission based on configuration space analysis. Using Vlasov scaling we obtain the corresponding mesoscopic (kinetic) equations, describing the density of the susceptible and infected compartments in space. The resulting system of equations can be seen as a spatial generalization of a host-vector disease model.

Severe dengue outbreaks and their consequences point to the need for prognosis and control methods, which can be derived from epidemiological mathematical models. In this article we develop a model to describe observed data on hospitalized dengue cases in Colombo (Sri Lanka) and Jakarta (Indonesia). Usually, the disease is modelled epidemiologically with the SIRUV model, consisting of the susceptible (S), infected (I) and recovered (R) humans and the uninfected (U) and infected (V) female mosquitoes. Because we do not have any information about the mosquito population, we reduce the model to an SIR model that depends on a time-dependent transmission rate β(t), and fit it to the available data sets. To this end, optimal control theory based on Pontryagin’s maximum (minimum) principle is applied, and the solution is obtained with numerical optimization methods. The results serve as a basis for different simulations.
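A minimal forward simulation of the reduced SIR model with a time-dependent transmission rate can be sketched as follows; the seasonal form of beta(t), the population size and the rates are illustrative, not the fitted values from the data:

```python
import math

def sir_step(S, I, R, beta, gamma, dt, N):
    """One explicit Euler step of the SIR model with transmission rate beta."""
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    return S - new_inf, I + new_inf - new_rec, R + new_rec

def beta(t):
    """Illustrative seasonal transmission rate (period one year)."""
    return 0.3 * (1.0 + 0.2 * math.sin(2.0 * math.pi * t / 365.0))

N = 1_000_000
S, I, R = N - 1_000.0, 1_000.0, 0.0
dt, gamma_rec = 0.1, 0.1
for k in range(int(100 / dt)):  # simulate 100 days
    S, I, R = sir_step(S, I, R, beta(k * dt), gamma_rec, dt, N)
```

In the paper the shape of β(t) is not prescribed but recovered from hospitalization data via optimal control; the sketch above only shows the forward dynamics such a fit must reproduce.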

Laser-induced thermotherapy (LITT) plays an important role in oncology in the treatment of human liver tumors. LITT is a minimally invasive method causing tumor destruction due to heat ablation and coagulative effects in the tissue. Tumor tissue is much more sensitive to heat than normal healthy tissue. The big advantage of LITT compared to other minimally invasive procedures, such as microwave ablation or radiofrequency therapy, is that the treatment takes place primarily under MRI control, so that patients are exposed to only a small radiation dose. The present paper describes the mathematical modeling of laser-induced thermotherapy and shows simulation results for porcine liver.

The aim of this contribution is to present a fiber-based modeling approach for the dynamic behavior of muscles within the musculoskeletal system. We represent the skeletal system as a rigid multi-body system actuated by muscles. We model each muscle as a one-dimensional cable with variable cross section undergoing large deformations and strains. To avoid penetration of the muscles and the skeleton, contact is taken into account. We use our framework to conduct a dynamic forward simulation of a simple upper limb model.

Laser-induced interstitial thermotherapy (LITT) is a medical treatment which attempts to destroy liver tumors by thermal ablation. A realistic real-time simulation shall support the practitioner online in planning the therapy. The heat transfer inside the liver can be described by a PDE system consisting of the so-called bio-heat equation and a radiative transfer model. We model the heat loss due to blood perfusion by a simple sink term with spatially varying coefficient accounting for the presence of vessels. Using PDE-constrained optimization we demonstrate how to fit this parameter in order to minimize the deviation between the predicted and measured temperature.

Laser-induced thermotherapy is a local, minimally invasive treatment for liver tumors, which uses laser radiation to destroy targeted tissue. Many factors, such as the placement of the applicator(s), the length of the treatment and the amount of radiation introduced, affect the success of the treatment. In this work, we focus on controlling the amount of laser power applied during the treatment. This results in a PDE-constrained optimal control problem. Because such problems are computationally expensive to solve directly, a space-mapping approach is used. The coarse model used in the space-mapping method is derived through a novel linearization of the constraining equations and subsequently reduced using proper orthogonal decomposition. An example problem shows the viability of this approach.

Measurement and mathematical description of the corneal surface and of the optical properties of the human eye are actively researched topics. To enhance the mathematical tools used in the field, a novel set of orthogonal functions—called rational Zernike functions—are presented in the paper; these functions are of great promise for correcting certain types of measurement errors that adversely affect the quality of corneal maps. Such errors arise e.g., due to unintended eye-movements, or spontaneous rotations of the eye-ball. The rational Zernike functions can be derived from the well-known Zernike polynomials—the latter polynomials are used widely in eye-related measurements and ophthalmology—via an argument transformation with a Blaschke function. This transformation is a congruent transformation in the Poincaré disk model of the Bolyai-Lobachevsky hyperbolic geometry.
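The argument transformation underlying the rational Zernike construction can be sketched with the Blaschke function itself; the pole a below is illustrative:

```python
def blaschke(z, a):
    """Blaschke function B_a(z) = (z - a) / (1 - conj(a)*z): a congruent
    transformation of the unit disk in the Poincare (hyperbolic) sense,
    mapping the pole a to the origin."""
    return (z - a) / (1.0 - a.conjugate() * z)

a = 0.3 + 0.2j                             # illustrative pole, |a| < 1
w = blaschke(0.5 - 0.1j, a)                # interior maps to interior: |w| < 1
boundary = blaschke(complex(0.6, 0.8), a)  # |z| = 1 maps to |B_a(z)| = 1
```

Composing a Zernike polynomial with such a map (z ↦ Z(B_a(z))) is, in the hedged reading of the abstract, how the rational Zernike functions absorb disk-preserving distortions such as small eye movements.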

This work presents the integration of a discrete muscle wrapping formulation into an optimal control framework based on the direct transcription method DMOCC (discrete mechanics and optimal control for constrained systems (Leyendecker et al., Optim. Control Appl. Meth. 31(6), 505–528, 2010)). The major contribution lies in the use of discrete variational calculus to describe the entire musculoskeletal system, including the muscle path, in a holistic way. The resulting coupled discrete Euler-Lagrange equations serve as equality constraints for the nonlinear programming problem resulting from the discretisation of an optimal control problem. A key advantage of this formulation is that the structure-preserving properties of the integrator enable the simulation to account for large, rapid changes in muscle paths at relatively moderate computational cost. In particular, the derived muscle wrapping formulation does not rely on special-case solutions, has no nested loops, has a modular structure, and works for an arbitrary number of obstacles. A biomechanical example shows the application of the given method to an optimal control problem with smooth surfaces.

Laser-induced thermotherapy (LITT) is used to treat liver cancer by inserting a laser applicator into the tumor and applying radiation to heat and destroy it. A mathematical model for the simulation of LITT is compared to experimental results with ex-vivo pig livers.

In this paper we provide a summary of our recent research activity in the field of biomedical signal processing by means of adaptive transformation methods using rational systems. We have dealt with several questions that can be treated efficiently using such mathematical modeling techniques. In our constructions the emphasis is on adaptivity. We have found that a transformation method adapted to the specific problem and to the signals themselves can perform better than a transformation of a general nature. This approach generates several mathematical challenges and questions: approximation, representation, optimization, and parameter extraction problems, among others. In this paper we give an overview of how these challenges can be properly addressed. We take ECG processing problems as a model to demonstrate them.

The introduction of varicella-zoster virus (VZV) vaccines into the routine vaccination schedule is under consideration in Hungary. Mathematical models can be of great use in informing public health policy decisions by comparing predictions for different scenarios and by quantifying the costs and benefits of immunization strategies. Here we summarize the major challenges, most of them specific to Hungary, in devising and parametrizing dynamical models of varicella transmission with vaccination. We gain some important insights from a simple compartmental model regarding the seasonality and intrinsic oscillation frequency of the disease dynamics, and the sensitivity to the underreporting ratio. Finally, we discuss ideas for a more complete, realistic model.

Forecasting seizures from information extracted from neuronal firing has great potential for controlling closed-loop neurostimulators. For the description of neuronal firing patterns we use self-exciting point processes, or Hawkes processes. In fitting them to simulated data, using a large variety of models, we consider both computability and reliability issues related to the maximum likelihood estimation (MLE) method. The models are classified via a single parameter related to stability regimes. The dependence of the accuracy of the individual parameter estimates on the different regimes is explored. We demonstrate the applicability of the MLE method to discriminate between different models with high confidence.
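A self-exciting (Hawkes) process with an exponential kernel can be simulated with Ogata's thinning algorithm; the intensity parameters below are illustrative, and the stability regime alluded to above corresponds to a branching ratio alpha/beta < 1:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata's thinning algorithm for a Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).
    The process is stable (stationary) when alpha/beta < 1."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        # Current intensity is an upper bound until the next event,
        # since the exponential kernel only decays between events.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)
        if t >= T:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if rng.random() * lam_bar <= lam_t:  # accept with prob lam_t/lam_bar
            events.append(t)

spikes = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=100.0)
# Expected count is roughly mu*T/(1 - alpha/beta) = 150 events.
```

Fitting mu, alpha and beta back from such simulated spike trains by maximum likelihood is the computability/reliability experiment the abstract describes.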

In this study, the two-dimensional steady MHD Stokes and MHD incompressible flows of a viscous, electrically conducting fluid are considered in a lid-driven cavity under the impact of a uniform horizontal magnetic field. The MHD flow equations are solved iteratively in terms of velocity components, stream function, vorticity and pressure by using the direct interpolation boundary element method (DIBEM), in which the inhomogeneity in the domain integral is interpolated by using radial basis functions. The boundary is discretized by constant elements, and a sufficient number of interior points is taken. The interpolation points differ from the source points owing to the singularities of the fundamental solution. It is found that as the Hartmann number increases, the main vortex of the flow shifts towards the moving top lid with decreasing magnitude, and the secondary flow below it is squeezed towards the main flow, leaving the rest of the cavity almost stagnant. The increase in M develops a side layer near the moving lid, but weakens the effect of Re in the MHD incompressible flow.

We present the dual reciprocity boundary element method (DRBEM) solution to magnetohydrodynamic (MHD) flow in a single duct and in two parallel ducts separated by conducting walls of arbitrary thickness in the direction of the external magnetic field. The DRBEM-discretized coupled MHD convection-diffusion equations in the ducts and the Laplace equations on the shared walls are solved as a whole, using constant boundary elements with the coupled induced-current wall conditions. It is shown that the conducting walls in the double ducts have a strong influence on the currents near the walls; the core flow increases in the co-flow case, but there is a strong reduction of the core flow in the counter-flow case. The coupling between the ducts with conducting thick walls induces reversed and counter-current flows, which may be used for heat and mass transfer in fusion applications. The proposed numerical scheme using DRBEM captures the well-known MHD flow characteristics as the Hartmann number increases.

The new algorithm proposed in Skiba (Int. J. Numer. Methods Fluids (2015), https://doi.org/10.1002/fld.4016) is applied to solving linear and nonlinear advection-diffusion problems on the surface of a sphere. The discretization of the advection-diffusion equation is based on the use of a spherical grid, the finite volume method and the splitting of the operator in coordinate directions. The numerical algorithm is of second-order accuracy in space and time. It is implicit, unconditionally stable, direct (without iterations) and fast in execution. The theoretical results obtained in Skiba (2015) are confirmed numerically by simulating various linear and nonlinear advection-diffusion processes. The results show the high accuracy and efficiency of the method, which correctly describes the advection-diffusion processes and the mass balance of the substance in the forced and dissipative discrete system, and conserves the total mass and the L^2-norm of the solution in the absence of external forcing and dissipation.

Gas transportation networks can be modeled by the isothermal Euler equations. Spatial discretization of these equations leads to large-scale systems of nonlinear differential-algebraic equations. Often, model order reduction is necessary for simulation of the discretized network equations under time constraints during operation. Direct reduction of such systems leads to ordinary differential equations which are very difficult to simulate especially if the index of the differential-algebraic equation is greater than one. We consider gas flow through a gas transportation network with more than one supply node which leads to differential-algebraic equations of index 2. We propose an index-aware approach which first automatically decouples the index 2 gas network into differential and algebraic parts leading to reduced-order models which are also differential-algebraic equations of the same index. This approach gives very accurate reduced order models which can be simulated using any standard ordinary differential equation numerical solver leading to accurate solutions.

The location of the surfaces of a double freeform lens, required for laser beam shaping, is governed by a Monge-Ampère type equation. We outline a least-squares solver and demonstrate the performance of the method for an example.

An efficient numerical method is introduced for the solution of space-fractional diffusion problems. We use the spectral fractional Laplacian operator with homogeneous Neumann and Dirichlet boundary conditions. The spatial discretization is based on the matrix transformation method. Using a recent algorithm for the computation of fractional matrix power-vector products and explicit time stepping, we develop a simple and efficient full discretization. The performance of our approach is demonstrated in some numerical experiments.
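The matrix transformation idea, taking a fractional power of the discretized Laplacian through its spectral decomposition, can be sketched in one dimension (the grid size and exponent are illustrative, and the full eigendecomposition used here is exactly what the efficient fractional matrix power-vector algorithm mentioned above avoids):

```python
import numpy as np

def fractional_laplacian_apply(n, alpha, v):
    """Apply the spectral fractional power A**alpha of the 1D Dirichlet
    finite-difference Laplacian A to the vector v, via the full
    eigendecomposition A = V diag(w) V^T (illustration only)."""
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
    w, V = np.linalg.eigh(A)  # eigenvalues w are positive
    return V @ (w ** alpha * (V.T @ v))

n = 50
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
v = np.sin(np.pi * x)  # an exact discrete eigenvector of A
out = fractional_laplacian_apply(n, 0.5, v)
# out is close to pi * v: the square root of the first eigenvalue of A
# approximates sqrt(pi**2) = pi.
```

Because the discrete sine is an exact eigenvector, the half-Laplacian acts on it as multiplication by the square root of the corresponding eigenvalue, which is the defining property of the spectral fractional operator.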

In this work we consider a Black-Scholes model that generalizes a previously proposed fractional Black-Scholes equation. A numerical scheme is presented to solve such models, and some numerical results are given for European double-knock-out barrier options. In this way, we are able to conclude that this generalized model can describe scenarios other than those captured by the classical (integer-order) and fractional Black-Scholes models.

Pseudo-parabolic equations have been used to model unsaturated fluid flow in porous media. In this paper it is shown how a pseudo-parabolic equation can be upscaled using the spatio-temporal decomposition employed in the Peszyńska-Showalter-Yi paper (Appl Anal 88(9):1265–1282, 2009). The spatio-temporal decomposition transforms the pseudo-parabolic equation into a system containing an elliptic partial differential equation and a temporal ordinary differential equation. To strengthen our argument, the pseudo-parabolic equation has been given advection/convection/drift terms. The upscaling is done with the technique of periodic homogenization via two-scale convergence. The well-posedness of the extended pseudo-parabolic equation is shown as well. Moreover, we argue that under certain conditions a non-local-in-time term arises from the elimination of an unknown.

In this work, a procedure to approximate the solution of special linear third-order matrix differential problems of the type Y^{(3)}(x) = A(x)Y(x) + B(x) with higher-order matrix splines is proposed. An illustrative example is given.
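As background, problems of this class can also be integrated by rewriting Y^{(3)}(x) = A(x)Y(x) + B(x) as a first-order system in (Y, Y', Y''). The sketch below uses a classical RK4 step instead of the matrix-spline construction of the paper; the coefficient functions and the test case are illustrative:

```python
def mat_add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_scale(c, X):
    return [[c * a for a in row] for row in X]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def solve_third_order(A, B, Y0, Y1, Y2, x0, x1, nsteps):
    """Integrate Y''' = A(x) Y + B(x) as the first-order system
    (Y, P, Q)' = (P, Q, A(x) Y + B(x)) with classical RK4 steps."""
    h = (x1 - x0) / nsteps

    def f(x, state):
        Y, P, Q = state
        return (P, Q, mat_add(mat_mul(A(x), Y), B(x)))

    def axpy(state, c, k):  # state + c*k, componentwise on the triple
        return tuple(mat_add(s, mat_scale(c, ki)) for s, ki in zip(state, k))

    state, x = (Y0, Y1, Y2), x0
    for _ in range(nsteps):
        k1 = f(x, state)
        k2 = f(x + h / 2, axpy(state, h / 2, k1))
        k3 = f(x + h / 2, axpy(state, h / 2, k2))
        k4 = f(x + h, axpy(state, h, k3))
        incr = [mat_add(mat_add(k1[i], mat_scale(2, mat_add(k2[i], k3[i]))),
                        k4[i]) for i in range(3)]
        state = tuple(mat_add(state[i], mat_scale(h / 6, incr[i]))
                      for i in range(3))
        x += h
    return state[0]
```

In the scalar test case A = [[1]], B = [[0]] with Y(0) = Y'(0) = Y''(0) = 1, the exact solution is e^x, which makes the accuracy of any solver for this problem class easy to check.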

In this work, we present a fractional-derivative-based iterative method for solving a nonlinear time-independent equation whose operator acts on a Hilbert space. We assume that the operator is uniformly monotone and Lipschitz continuous, and we prove that the algorithm converges. We have also tested the method numerically on a fluid dynamics problem; the results show that the algorithm is stable.

This paper is a study of the homogenization of the heat conduction equation, with a homogeneous Dirichlet boundary condition, having a periodically oscillating thermal conductivity and a vanishing volumetric heat capacity. In particular, the volumetric heat capacity equals ε^q, and the thermal conductivity oscillates with period ε in space and ε^r in time, where 0 < q < r are real numbers. By using certain evolution settings of multiscale and very weak multiscale convergence, we investigate, as ε tends to zero, how the relation between the volumetric heat capacity and the microscopic structure affects the homogenized problem and its associated local problem. It turns out that this relation gives rise to certain special effects in the homogenization result.

This work deals with the mathematical analysis and numerical solution of a parabolic problem with dynamic hysteresis motivated by electromagnetic field equations. In this case, the values of the magnetic induction depend not only on the current values of the magnetic field, but also on the previous ones and on the velocity at which they have been attained. The hysteresis is modelled by the dynamic Preisach operator. Based upon the definition of the dynamic relay, which is introduced and formalized as the solution of a multi-valued ordinary differential equation, the definition of the dynamic Preisach operator is recalled and some of its main properties are established. Under suitable assumptions, the well-posedness of a weak formulation of the initial problem is shown and a numerical solution is computed.

We construct specific embedded pairs for second- and third-order optimal strong stability preserving implicit Runge–Kutta methods with large absolute stability regions. These pairs offer the possibility of adaptive implementation for strong stability preserving (SSP) methods while maintaining their inherent nonlinear stability properties.
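The embedded-pair idea can be illustrated with the well-known explicit Shu-Osher SSPRK(3,3) method and a second-order companion assembled from the same stages; note that the paper itself constructs implicit pairs, so everything below is a simplified explicit sketch with illustrative parameters:

```python
def ssprk33_embedded_step(f, u, t, dt):
    """One step of the explicit Shu-Osher SSPRK(3,3) scheme together with an
    embedded second-order (Heun / SSPRK(2,2)) solution built from the same
    stages, giving a local error estimate for step-size adaptation."""
    u1 = u + dt * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
    u3 = u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))
    u_low = 0.5 * (u + u1 + dt * f(t + dt, u1))  # Heun, reusing stage u1
    return u3, abs(u3 - u_low)

def integrate(f, u0, t0, t1, nsteps):
    """Fixed-step driver returning the solution and the largest local error
    estimate observed; an adaptive driver would use the estimate per step."""
    u, t = u0, t0
    dt = (t1 - t0) / nsteps
    err = 0.0
    for _ in range(nsteps):
        u, e = ssprk33_embedded_step(f, u, t, dt)
        err = max(err, e)
        t += dt
    return u, err
```

For the scalar test problem u' = -u, the third-order solution converges as expected while the embedded difference provides a small, strictly positive error indicator.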

We extend the scheme developed in B. Düring, A. Pitkin, “High-order compact finite difference scheme for option pricing in stochastic volatility jump models”, 2019, to the so-called stochastic volatility with contemporaneous jumps (SVCJ) model derived by Duffie, Pan and Singleton. The performance of the scheme is assessed through a number of numerical experiments, using comparisons against a standard second-order central difference scheme. We observe that the new high-order compact scheme achieves fourth-order convergence and discuss the effects on efficiency and computation time.

We consider the use of parallel-in-time algorithms of the Parareal and multigrid-reduction-in-time (MGRIT) methodologies for the parallel-in-time solution of the eddy current problem. By applying these methods to a two-dimensional model problem for a coaxial cable, we show that a significant speedup can be achieved in comparison with sequential time stepping.

We consider the mean-field approximation of an individual-based model describing cell motility and proliferation, which incorporates the volume exclusion principle, the go-or-grow hypothesis and an explicit cell cycle delay. To utilise the framework of on-lattice agent-based models, we make the assumption that cells enter mitosis only if they can secure an additional site for the daughter cell, in which case they occupy two lattice sites until the completion of mitosis. The mean-field model is expressed by a system of delay differential equations and includes variables such as the numbers of motile cells, proliferating cells, reserved sites and empty sites. We prove the convergence of biologically feasible solutions: eventually all available space is filled by motile cells, after an initial phase in which the proliferating cell population first increases and then diminishes. By comparing the behaviour of the mean-field model for different parameter values and initial cell distributions, we illustrate that the total cell population may follow a logistic-type growth curve or may grow in a step-function-like fashion.
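As a much simpler illustration of delay-induced growth dynamics (not the paper's mean-field model), a delayed logistic equation N'(t) = r*N(t - tau)*(1 - N(t)/K) can be integrated with Euler's method and a history buffer; all parameter values below are illustrative assumptions:

```python
def delayed_logistic(r, tau, K, n0, dt, nsteps):
    """Euler integration of the delayed logistic equation
    N'(t) = r * N(t - tau) * (1 - N(t)/K),
    with constant history N(t) = n0 for t <= 0."""
    lag = int(round(tau / dt))       # delay measured in time steps
    history = [n0] * (lag + 1)       # history[-1] holds the current value
    for _ in range(nsteps):
        n_now = history[-1]
        n_lag = history[-1 - lag]    # value one delay interval in the past
        history.append(n_now + dt * r * n_lag * (1.0 - n_now / K))
    return history

# Illustrative run: small delay relative to growth rate, so the population
# converges to the carrying capacity K in a logistic-like fashion.
path = delayed_logistic(r=0.5, tau=1.0, K=1.0, n0=0.1, dt=0.01, nsteps=4000)
```

Increasing r*tau in such models typically produces damped or sustained oscillations around K, which is one simple mechanism behind non-monotone, step-like growth curves.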

The poroelasticity theory that was originally developed in the context of geophysical applications has been successfully used to model the mechanical behavior of fluid-saturated living bone tissue. In this paper we focus on the numerical solution of the coupled fluid flow and mechanics in Biot’s consolidation model of poroelasticity. The method combines mixed finite elements for the Darcy flow and Galerkin finite elements for the elasticity. The permeability tensor in the model is allowed to be a nonlinear function of the deformation, since this influence is relevant in the case of biological tissues such as bone. We deal with the nonlinear term by considering a semi-implicit-in-time scheme. We provide a priori error estimates for the numerical solution of the fully discretized model. For efficiency, we also explore an operator splitting strategy in which the flow problem is solved before the mechanical problem, in an iterative process.

In this article, we present a numerical solver for simulating district heating networks. The method applies local time stepping to networks of linear advection equations. Numerical diffusion, as well as the computational effort on each edge, is reduced significantly. The combination with high-order coupling and reconstruction techniques leads to a very efficient scheme.

Stability is one of the key properties when modeling a physical system across all model hierarchies. We focus on hyperbolic differential algebraic equations dominated by advection, using district heating networks as an example. For the transport dynamics, a solution of the corresponding Lyapunov inequality is presented, ensuring stability. Using an existing network as an example, we numerically demonstrate that stability also translates to the reduced order model (ROM).

In this paper, a linear instability criterion is derived to show the existence of heterogeneous spatial patterns for a degenerate Keller-Segel model. We show that the nonlinear system behaves asymptotically as a linear combination of the eigenvectors associated with the largest eigenvalues. A finite volume method is implemented to investigate numerically the appearance of heterogeneous spatial patterns in two space dimensions for the given model. The nonlinear solution is compared with the predicted nonhomogeneous steady solution obtained from the linear instability analysis.

During a machining process, the heat produced results in thermomechanical deformation of the workpiece and thus in incorrect material removal by the cutting tool, which may exceed given tolerances. We present a numerical model based on an adaptive finite element simulation for thermomechanics, which takes into account both the approximation of the temperature field and the approximation of the time-dependent domain. Control of the milling parameters and the tool path can be used to minimize the final shape deviation. A multi-objective approach can additionally try to reduce tool wear. We present results from a simulation-based optimization approach for a simplified workpiece.

We consider the optimal shape design of a distributor geometry in the context of industrial fiber spinning. In this process a molten polymer is routed from a pipe to a spinneret plate with a larger cross section, where thin fibers, which are then further processed, are spun from the fluid. The residence time or material age of the polymer in the distributor, which is modeled through an additional advection-diffusion-reaction equation, has to be controlled such that fluid stagnation is prevented, since this would cause material degradation and a decrease in the quality of the fibers. In order to optimize the geometry, we formally derive the adjoint equations and the volume formulation of the shape derivative and apply them within a gradient descent method.

Mathematical modeling and numerical simulation of human crowd motion have become a major subject of research with a wide field of applications. A variety of models for pedestrian behavior have been proposed on different levels of description in recent years. Macroscopic pedestrian flow models involving equations for the density and mean velocity of the flow are derived in Bellomo and Dogbe (Math. Models Methods Appl. Sci. 18:1317-1345, 2008), Burger et al. (Discrete Contin. Dynam. Systems Ser. B: J. Bridging Math. Sci. 19:1311-1333, 2014), Hughes (Transp. Res. B Methodol. 36:507-535, 2000) and Mahato et al. (Appl. Math. Model. 53:447-461, 2018; Int. J. Adv. Eng. Sci. Appl. Math. 10:41-53, 2018).

In this brief paper, stochastic control theory under incomplete information is applied to the mathematical modeling of inland fishery management. The inland fishery resource to be managed is non-renewable in the sense that its reproduction is unsuccessful. The incomplete information comes from the uncertain body growth rate of the individuals due to temporal regime-switching of their foods. We show that finding the most cost-effective harvesting policy for the non-renewable fishery resource reduces to solving a Hamilton-Jacobi-Bellman equation. The equation is solved numerically via a simple finite difference scheme, focusing on the major inland fishery resource Plecoglossus altivelis (P. altivelis: Ayu) in Japan.

In this paper we address the optimal management of an urban road network by combining optimal control of partial differential equations, numerical simulation and optimization techniques. Specifically, we are interested in analyzing the optimal management of the intersections of an urban road network, in order to reduce both atmospheric pollution and traffic congestion. To optimize the network management, we consider a multi-objective optimal control problem, balancing—within a cooperative Pareto framework—a traffic cost function involving travel times and outflows, and a pollution cost function related to contaminant concentrations. In the second part of this work we present numerical tests for a real-world example of ecological interest, posed in the Guadalajara Metropolitan Area (Mexico), where the possibilities of our approach are shown.

Heavy metals enter aquatic systems as a result of very different human activities involving the mining, processing and use of substances containing metal pollutants. Phytoremediation is a cost-effective plant-based approach to the remediation of heavy metal-contaminated bodies of water that takes advantage of the ability of algae to concentrate elements from the environment in their tissues. This paper deals with the optimization of phytoremediation methods, by combining mathematical modelling, optimal control and numerical optimization. In particular, we propose a 2D mathematical model coupling partial differential equations modelling the concentrations of heavy metals, algae and nutrients in large waterbodies. Questions related to determining the minimal quantity of algae to be used, and also to locating the optimal place for such algal mass, are formulated as an optimal control problem for this scenario, and several numerical results for a realistic case are presented.

This paper is concerned with a new mathematical model for intraday electricity trading involving both renewable and conventional generation. The model allows us to incorporate market data, e.g. for the half-spread and the immediate price impact. The optimal trading and generation strategy of an agent is derived as the viscosity solution of a second-order Hamilton-Jacobi-Bellman (HJB) equation for which no closed-form solution can be given. We thus construct a numerical approximation allowing us to use continuous input data. Numerical results for a portfolio consisting of three conventional units and wind power are provided.

We consider the operation of coupled microgrids. Each microgrid consists of a number of residential energy systems, each including an energy storage device. The goal is to determine an optimal energy exchange between the microgrids, which results in a two-level optimization problem. On the lower level, within each microgrid, a grid operator sets up an optimization scheme to coordinate the individual subsystems. We propose a surrogate model based on radial basis functions to approximate this optimization based process and investigate its applicability in the higher level by conducting a case study based on an Australian data set.

We are concerned with optimal control strategies subject to uncertain demands. An Ornstein-Uhlenbeck process describes the uncertain demand. The transport within the supply system is modeled by the linear advection equation. We consider different approaches to control the produced amount at a given time to meet the stochastic demand in an optimal way. In particular, we introduce an undersupply penalty and analyze its effect on the optimal output in a numerical simulation study.
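An Ornstein-Uhlenbeck demand path can be sampled exactly one step at a time, and an asymmetric cost then makes the effect of an undersupply penalty visible; the parameter values and the specific cost form below are illustrative assumptions, not taken from the paper:

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, nsteps, seed=0):
    """Sample a path of dX = theta*(mu - X) dt + sigma dW using the exact
    one-step (conditionally Gaussian) transition, avoiding Euler bias."""
    rng = random.Random(seed)
    decay = math.exp(-theta * dt)
    sd = sigma * math.sqrt((1.0 - decay**2) / (2.0 * theta))
    path, x = [x0], x0
    for _ in range(nsteps):
        x = mu + (x - mu) * decay + sd * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def supply_cost(supply, demand, c_over=1.0, c_under=5.0):
    """Asymmetric cost: undersupply (demand exceeding supply) is penalized
    more heavily than oversupply, mimicking an undersupply penalty."""
    return sum(c_over * max(s - d, 0.0) + c_under * max(d - s, 0.0)
               for s, d in zip(supply, demand))

# Illustrative demand path fluctuating around a long-run mean of 5.
demand = simulate_ou(theta=1.0, mu=5.0, sigma=0.5, x0=5.0, dt=0.1, nsteps=5000)
```

Raising `c_under` relative to `c_over` pushes any cost-minimizing constant output above the mean demand, which is the qualitative effect an undersupply penalty has on the optimal production level.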

In this paper, we introduce a time-continuous production model that allows for random machine failures, where the failure probability depends on the production history. This bidirectional relationship between historical failure probabilities and production is modeled mathematically using the theory of piecewise deterministic Markov processes (PDMPs). In this way, the system is rewritten as a Markovian system so that classical results can be applied. In addition, we present a suitable approach, taken from machine reliability theory, to connect past production and the failure rate. Finally, we investigate the behavior of the presented model numerically in examples by considering sample means of relevant quantities and relative frequencies of the number of repairs.

Textiles are present in many applications and are an interesting yet complicated subject. For industry, the macroscopic behavior of textiles is most important. In the following article we deal with the buckling behavior of a textile shell under uniaxial tension in the nonlinear regime. The nonlinearity redirects the applied tensile force into bending of the plate in the third direction. To model this behavior, we assume a homogenized shell of von Kármán type, obtained via homogenization of the textile with a given microstructure. Furthermore, a careful reduction to 1D for the case of a belt-like geometry, i.e., one narrow in the second direction, gives a buckling model which can be optimized with respect to both its shape and retardation. The resulting macroscopic optimization problem with PDE constraints yields a Pareto optimization with local minima.

Grasping is a complex human movement. During grasping, when the hand closes around the object, the multibody system changes from a kinematic tree structure to a closed loop contact problem. To better understand work-related disorders or optimize the execution of activities of daily life, an optimal control simulation of grasping is useful. We simulate the grasping action with a three-dimensional rigid multibody model composed of two fingers actuated by joint torques. The grasping movement is composed of a reaching phase (no contacts) and a grasping phase (closed contacts). The contact constraints are imposed first through distances between the fingers and the object surfaces and then through spherical joints. Thus, the dynamics of grasping is described by a hybrid dynamical system with a given switching sequence and unknown switching times. To determine a favourable trajectory for the grasping action, we solve an optimal control problem (OCP). The OCP is solved using the direct transcription method DMOCC, leading to a structure-preserving approximation of the continuous problem. An objective involving either the contact polygon centroid or the control torques is minimized subject to discrete Euler-Lagrange equations, boundary conditions and path constraints. The dynamics of the object to be grasped, along with Coulomb friction, is also taken into account.

Shape optimization is an important tool to increase the reliability of mechanical components. The use of stochastic objective functionals is beneficial, as the failure mechanism is usually described using stochastic models. Furthermore, stochastic objective functionals are smoother than, e.g., maxima of point stresses. Here, we consider a stochastic objective functional originating from modeling the failure of ceramic. Ceramic is a material frequently used in industry because of its favorable properties. We follow this approach by minimizing the component’s probability of failure under a given tensile load. Since the fundamental work of Weibull, the probabilistic description of the strength of ceramics has been standard and widely applied. The resulting failure probability is used as the objective function in PDE-constrained shape optimization. Often the constraining PDE is discretized using finite elements, which requires mesh morphing or re-meshing in every step of the optimization. This can be expensive, and it can introduce noise. Instead, we propose to use composite finite elements for the discretization. Using the Lagrangian formalism, the shape gradient via the adjoint equation is calculated at low computational cost.

These days, techniques from the research field of Artificial Intelligence (AI) are widely applied. Researchers increasingly recognize the possibilities and advantages of these techniques both for new types of tasks and for problems that have been studied for years and addressed so far by well-known solution techniques. We focus on Reinforcement Learning (RL) [14] in the context of optimal control problems. We point out the similarities and differences between RL and classical optimal control approaches and stress the advantages of RL applied to biomechanical systems.

Ensemble-based approaches are very effective in various fields at raising the accuracy above that of their individual members when a voting rule is applied to aggregate the individual decisions. In this paper, we investigate how to find and characterize the ensembles with the highest accuracy when the total cost of the ensemble members is bounded. If majority voting is chosen for aggregation, this question leads to a Knapsack problem with a non-linear and non-separable objective function in binary and multiclass classification. As conventional solution methods cannot be applied to this task, a novel stochastic approach was introduced in the binary case, in which the energy function is treated as the joint probability function of the member accuracies. We present some theoretical results on the expected ensemble accuracy and its variance in the multiclass classification problem which can help solve the Knapsack problem.
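The non-separability of the objective is already visible for independent members: the majority-vote accuracy depends on the whole accuracy profile rather than on a sum over members. A brute-force sketch for small instances (the member accuracies, costs and budget below are invented for illustration):

```python
from itertools import combinations

def majority_accuracy(ps):
    """Probability that the majority of independent binary classifiers with
    accuracies ps votes correctly (intended for odd ensemble sizes; an exact
    tie would be counted as incorrect)."""
    n = len(ps)
    dist = [1.0]  # dist[k] = P(exactly k members are correct)
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1.0 - p)
            new[k + 1] += q * p
        dist = new
    return sum(q for k, q in enumerate(dist) if 2 * k > n)

def best_ensemble(ps, costs, budget):
    """Brute-force the budget-constrained ensemble selection (odd sizes only),
    maximizing the majority-vote accuracy: a tiny Knapsack instance with a
    non-separable objective."""
    best, best_acc = (), 0.0
    idx = range(len(ps))
    for r in range(1, len(ps) + 1, 2):
        for sub in combinations(idx, r):
            if sum(costs[i] for i in sub) <= budget:
                acc = majority_accuracy([ps[i] for i in sub])
                if acc > best_acc:
                    best, best_acc = sub, acc
    return best, best_acc
```

Note that adding a cheap but weak member can lower the majority accuracy, which is exactly why the objective cannot be written as a sum of per-member contributions and why conventional Knapsack algorithms do not apply directly.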

Recently, a deterministic queueing model was introduced wherein customers are given the opportunity to choose between two queues. The information provided to the customers is not up to date: instead, customers are given the queue-length information as it was some time units in the past. This time delay impacts the dynamical behavior of the queues and hence the decision-making process of the customers. We revisit this queues-with-choice model from a symmetry perspective. We show that the symmetry structure of the model can be used to classify the types and kinds of solutions that can occur. In particular, our results explain why only asynchronous periodic solutions and symmetric equilibrium solutions arise in such models, while synchronous periodic solutions and asymmetric equilibrium solutions do not occur. Our method can also be applied to study similar models with a larger number of queues.

Errors-In-Variables (EIV) models, in which both input and output data are contaminated by noise, have applications in signal processing. We propose a method for constructing non-asymptotic confidence regions for the parameters of EIV models. The method is based on the Leave-out Sign-dominant Correlation Regions (LSCR) principle, which gives probabilistically guaranteed confidence regions when the input is measured without noise. A regression model is utilized to extend LSCR to EIV systems. The newly established regression vector contains the past outputs and the estimated past inputs. It is shown that the corresponding prediction error has the desired properties, so that it can be used to form correlation functions from which confidence regions can be constructed. For any finite number of data points, it is proved that the region contains the true parameter with a user-chosen probability.

For model order reduction of quadratic-bilinear systems, a moment matching approach has recently been proposed in which univariate frequency responses are constructed by applying the associated transform to the multivariate transfer functions. This approach comes with the obvious advantage that only one-dimensional interpolation frequencies need to be considered, but it suffers from the large size of the resulting equation systems and the high computational demands, which make the approach impractical for most applications. In this paper, by exploiting the problem's underlying sparse tensor structure, we propose a splitting algorithm that overcomes this curse of dimensionality. We demonstrate the performance of the extended univariate frequency approach and compare it with the well-established multimoment matching approach regarding accuracy, efficiency and memory requirements.

Data science is piquing the interest of many large and small organisations, and managers are asking universities for information and advice. Typically, the query is: I have many sensors and many measurements; what shall I do with all this data, and how can I get ready for Industry 4.0? The so-called fourth industrial revolution refers to automation and control based on data exchange in a digital environment where measurements are available on all aspects of production. Data science plays an intrinsic role in this scenario and is focused on understanding and using data. It requires a challenging mix of capability in data analytics and information technology, and business know-how. Statisticians need to work with computer scientists; data analytics includes machine learning and statistical analysis, which extract meaning from data in different ways. Moving towards increased use of data requires buy-in from higher management and board members. Although serious progress involves a holistic approach, exemplars demonstrating potential value are also beneficial. This talk considers the implications for mathematicians and statisticians of the growing industrial demands and discusses examples from ongoing research projects with industrial partners where data visualisation, multivariate statistical process control charts and funnel plots have made an important contribution.

Calibrating subsurface reservoir models with historical well observations leads to a severely ill-posed inverse problem known as history matching. The recently proposed Ensemble Smoother with Multiple Data Assimilation (ES-MDA) method has proven to be a successful stochastic technique for solving this inverse problem, but its computational cost can be high in realistic scenarios and it remains challenging to incorporate certain non-Gaussian types of a-priori information into it. In this work we combine the ES-MDA method with Multiple-Point Statistics (MPS) and the K-SVD technique for building sparse dictionaries in order to obtain a novel sparsity-based history matching scheme that preserves non-Gaussian structural prior information and at the same time reduces computational cost. We present numerical experiments in 3D on a modified SPE10 benchmark reservoir model that demonstrate the performance of this new technique.

The aim of this paper is to further investigate the properties of the octonion Fourier transform (OFT) of real-valued functions of three variables and its potential applications in signal and system processing. This is a continuation of the work started by Hahn and Snopek, in which they studied the definition of the octonion Fourier transform and its applications in the analysis of hypercomplex analytic signals. First, the octonion algebra and the new quadruple-complex numbers algebra are introduced. Then, the OFT definition is recalled, together with some basic properties proved in earlier work. The main part of the article is devoted to new properties of the OFT that allow us to use the OFT in the analysis of multidimensional signals and LTI systems, i.e. differentiation and convolution of real-valued signals.

Analog-to-probability conversion is introduced as a new concept for efficient parameter extraction from analog signals that can be described by nonlinear models. The current state of information about these parameters is represented by a multivariate probability distribution. Only a digital-to-analog converter and a comparator are required as acquisition hardware. The introduced approach reduces the number of comparisons to be done by the hardware and therefore the total energy consumption. As a proof of concept the algorithm is implemented on a system-on-chip and compared to a nonlinear least squares approach.

In this paper we present two algorithms for an advanced driver assistance system to investigate road geometry. The proposed solutions can handle both simple and complex scenarios, e.g. construction zones. Our input data consists of segments and polygonal paths, whose clustering gives a proper input for a lane model. The presented methods use thresholding and spectral clustering approaches.

A model-based approach to fault detection for linear electromagnetic actuators is proposed and validated in the framework of an electromagnetic actuator in bistable valve operation. Forward uncertainty propagation for the nonlinear mathematical model is performed to determine the probability of faultless operation under aleatoric and epistemic uncertainties. A basic technique is then proposed for deciding whether a fault has occurred.

Building confidence regions for regression models is of high importance: for example, such regions can be used for uncertainty quantification and are also fundamental for robust optimization. In practice, these regions are often computed from asymptotic distributions, which however only lead to heuristic confidence sets. Sign-Perturbed Sums (SPS) is a resampling method which can construct exact, non-asymptotic, distribution-free confidence regions under very mild statistical assumptions. In its standard form, the SPS regions are built around the least-squares estimate of linear regression problems and have favorable properties: they are star convex, strongly consistent, and have efficient ellipsoidal outer approximations. In this paper, we extend the SPS method to regularized estimates; in particular, we present variants of SPS for ridge regression, LASSO and elastic net regularization.
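The core SPS membership test can be sketched for a scalar linear regression y = theta*x + noise: a candidate theta is kept if the reference correlation sum is not among the q largest of m sign-perturbed copies. The sample data, the values of m and q, and the crude tie handling below are illustrative simplifications of the full method (which uses a randomized tie-breaking ordering):

```python
import random

def sps_indicator(xs, ys, theta, m=100, q=5, seed=1):
    """Return True if theta lies inside a (roughly) 1 - q/m SPS confidence
    region for the scalar model y = theta*x + noise. Ties between the
    reference sum and perturbed sums are resolved crudely here."""
    rng = random.Random(seed)
    eps = [y - theta * x for x, y in zip(xs, ys)]  # prediction errors
    s0 = abs(sum(x * e for x, e in zip(xs, eps)))  # reference sum
    count_geq = 0  # perturbed sums at least as large as the reference
    for _ in range(m - 1):
        signs = [rng.choice((-1, 1)) for _ in xs]  # random sign perturbation
        si = abs(sum(a * x * e for a, x, e in zip(signs, xs, eps)))
        if si >= s0:
            count_geq += 1
    # theta is kept if s0 is NOT among the q largest of the m sums
    return count_geq >= q
```

Near the true parameter the residuals are uncorrelated with the regressors, so the reference sum behaves like one of the sign-perturbed sums; far from it, the correlation makes the reference sum dominate and the candidate is rejected.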

In this paper we provide an algorithm to decide (or to support the decision) whether a repeatedly occurring pattern in a digital image can be considered periodic. Our approach extracts specific image components and represents each of them by a single pixel. To decide whether the resulting point set has a grid-like nature, we use lattice theory and the LLL algorithm to fit lattices to the point set, together with Barvinok's efficient lattice-point counting method. With this work we complete some of our former results, in which the lattice fitting ignored possible holes inside the point set. Namely, after suitable transformations we now consider the convex hull of the point set, which lets us detect and penalize fitted lattice points that fall into holes of the original point set, or equivalently of the image pattern. As a practical demonstration of our method, we show how it can be applied to recognize segmentation errors of atypical/typical pigmented networks in skin lesion images.

Change detection between images taken at very different times, from remote-sensing databases and from up-to-date satellite-borne or UAV-borne imaging, is an emerging technology platform today. Outdoor scenes, principally in the observation of nature reserves, agricultural meadows and forest areas, change in illumination, coloring, textures and shadows over time, and the resolution and geometric properties of the imaging conditions may also be diverse; hence robust, semantic-level algorithms must be developed for comparing images of the same or similar places taken at very different times. Earlier, a new method, the fusion Markov Random Field (fMRF) method, was introduced, which applies unsupervised or partly supervised clustering to a fused image series using a cross-layer similarity measure, followed by a multi-layer Markov Random Field segmentation. This paper shows the effective parametrization of the fusion MRF segmentation method for the analysis of agricultural areas with fine details and difficult subclasses.

In this work we examine some stochastic ordering relations, namely the increasing convex order and the Lorenz order, between random variables which arise from a simple lottery setting as well as the relation between their natural continuous variants. We will provide stochastic ordering results for the continuized random variables.

Non-destructive damage detection has become a very active research topic recently. This paper is devoted to the processing of time-harmonic thermograms (color images of one side of the sample to be inspected, obtained by a thermal camera) for structural health monitoring of thin plates. Our approach is based on the evaluation of an indicator function, the so-called topological derivative, which will identify the regions inside the plate where damage is located.