
Open Access 23-09-2022

The evolution of the law of random processes in the analysis of dynamic systems

Published in: Meccanica | Issue 10/2022


Abstract

The paper presents a method for determining the evolution of the cumulative distribution function of random processes which are encountered in the study of dynamic systems with some uncertainties in the characterizing parameters. It is proved that these distribution functions are the solution of a partial differential equation, whose coefficients can be determined once the dynamic system has been solved, and whose numerical solution can be obtained with the finite difference method. Two simple problems are solved here both explicitly and numerically, then the obtained results are compared with each other.

1 Introduction

The analysis of structures subjected to dynamic loads is now generally conducted using refined mechanical models and adequate numerical techniques. However, even when the problem is well-posed so that both the existence and the uniqueness of the solution are guaranteed, some parameters used to describe the geometric and mechanical characteristics of the structure and the external actions are generally affected by uncertainties that must be taken into account in the analysis. These input parameters are thus represented by random variables, which are defined on an appropriate probability space and whose law is supposed to be known. A stochastic process is thus obtained which is parameterized over time and has the Euclidean space as state space.
In many applications, it is necessary to determine the probability distribution of some output quantities. It is a question of determining the evolution over time of the law of some random process which is defined in terms of the solution of the dynamic system. This objective is generally achieved using the Monte Carlo method which, at least in principle, gives the possibility to consider complicated models and geometries without having to resort to unrealistic simplifying hypotheses. On the other hand, this method can require long computation times which may not be compatible with the required precision [1].
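For concreteness, the Monte Carlo approach can be sketched on a toy problem of ours (not taken from the paper): the scalar system \({\dot{x}}=\theta {x}\), \(x(0)=1\), with \(\theta\) uniformly distributed on [0, 1], whose closed-form solution is \(x(t)=e^{\theta {t}}\). One samples the uncertain parameter, solves the deterministic problem for each sample, and accumulates an empirical distribution function of the output:

```python
import math
import random

def monte_carlo_cdf(z, t, n_samples=100_000, seed=0):
    """Estimate P(x(t) <= z) for x' = theta * x, x(0) = 1, theta ~ U[0, 1].

    Each sample solves the deterministic problem (here in closed form,
    x(t) = exp(theta * t)); in general a numerical integrator is used."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        theta = rng.random()        # one realization of the random parameter
        x_t = math.exp(theta * t)   # deterministic solution for this sample
        hits += x_t <= z
    return hits / n_samples

# exact CDF for comparison: P(exp(theta * t) <= z) = min(1, max(0, ln(z) / t))
```

Since the statistical error decays like \(1/\sqrt{N}\), halving it requires four times as many samples, which illustrates the computational cost mentioned above.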
A different way to deal with the problem is to use the generalized density evolution equation, which is a consequence of the "principle of preservation of probability" [2]. This method leads to writing a linear PDE whose solution gives the probability density function for the quantities of interest. The coefficients of this equation at each instant depend on the value of the state variables and then can be obtained from the (deterministic) solution of the dynamic system. The method of the generalized density evolution equation has been implemented into the MADY code, which has already the routines for dynamic analysis of plane, three-dimensional, or beam and shell-based structures [3], and has been applied to the study of some masonry constructions [4, 5]. For these structures, the uncertainties have particular relevance, due to the constitutive characteristics of the material and the geometry.
In this paper, we deduce an equation similar to the one proposed in [2] but which allows calculating the cumulative distribution function directly, instead of the density function, of the chosen random variable. Indeed, while the former is always a locally integrable function, it may happen that the latter is defined only in a distributional sense. The deduction is made using classical theorems of the measure theory; although most notions discussed below can be presented in their intuitive sense, we prefer to give explicit definitions and proofs to avoid misunderstandings. Thus, both the approach proposed in this paper and the one proposed in [2] are rigorously justified. Moreover, with the help of the differential forms and the coarea formula, relations are deduced for the explicit computation of both the probability density function and the cumulative distribution function; these are especially useful if the number of random input parameters is greater than one.
Finally, two examples are presented in which the evolution of the probability distribution of a chosen output parameter is explicitly calculated. The solution is then compared with the numerical one obtained with the MADY code, which has meanwhile been updated with the new numerical procedures presented in this paper.

2 Background and notations

Let \((\Omega ,{\mathcal {A}},{\mathbb {P}})\) be a probability space. Here \(\Omega\) is the set of the outcomes, \({\mathcal {A}}\) is the \(\sigma\)-algebra on \(\Omega\) made of all the events and \({\mathbb {P}}: {\mathcal {A}}\rightarrow [0,1]\) is a probability measure, i.e. a positive measure on \((\Omega ,{\mathcal {A}})\) such that \({\mathbb {P}}(\Omega )=1.\) Moreover, let \({\mathbb {R}}^n\) and \({\mathcal {B}}({\mathbb {R}}^n)\) be the n-dimensional Euclidean space and its corresponding Borel \(\sigma\)-algebra, respectively.
A map \({\mathbf{X}}:\Omega \rightarrow {\mathbb {R}}^n\) is said to be a (vector) random variable if it is \({\mathcal {B}}({\mathbb {R}}^n)\)-measurable, i.e. if
$$\begin{aligned} \{{\mathbf{X}}\in {B}\}\in {{\mathcal {A}}}, \text{ for } \text{ each } B\in {{\mathcal {B}}({\mathbb {R}}^n)}, \end{aligned}$$
where, as usual, \(\{{\mathbf{X}}\in {B}\}\) denotes \({\mathbf{X}}^{-1}(B)\). The law of \({\mathbf{X}}\), i.e. the image of the measure \({\mathbb {P}}\) under \({\mathbf{X}}\) [6], is the measure \(\mu _X\) defined on \(({\mathbb {R}}^n,{\mathcal {B}}({\mathbb {R}}^n))\) by
$$\begin{aligned} \mu _X(B)={\mathbb {P}}(\{{\mathbf{X}}\in {B}\}), \text{ for } \text{ each } B\in {{\mathcal {B}}({\mathbb {R}}^n)} \end{aligned}$$
and the function \(F_X: {\mathbb {R}}^n\rightarrow [0,1]\), defined by
$$\begin{aligned} F_X{(a_1,a_2,\ldots ,a_n)}=\mu _X(\{{\mathbf{x}}\in {\mathbb {R}}^n:-\infty <x_i\le {a_i}, i=1,2,\ldots ,n\}), \end{aligned}$$
is called the cumulative distribution function of \({\mathbf{X}}\).
Let \({\mathcal {L}}^n\) be the Lebesgue measure on \(({\mathbb {R}}^n,{\mathcal {B}}({\mathbb {R}}^n))\). The measure \(\mu _X\) is said to be absolutely continuous (with respect to \({\mathcal {L}}^n\)) if \(\mu _X(B)=0\) for every set \(B\in {\mathcal {B}}({\mathbb {R}}^n)\) with \({\mathcal {L}}^n(B)=0\). In this case, the Radon-Nikodym theorem [6] guarantees the existence of a nonnegative integrable function \(p_X\) such that
$$\begin{aligned} \mu _{X}(B)=\int _B p_{X}({\mathbf{x}}) d{\mathbf{x}} \end{aligned}$$
(1)
for every \(B\in {\mathcal {B}}({\mathbb {R}}^n)\). The function \(p_{X}\) is called the (joint) probability density function of the vector random variable \({\mathbf{X}}\), and it holds that
$$\begin{aligned} F_X{(x_1,x_2,\ldots ,x_n)}=\int _{-\infty }^{x_1}\ldots \int _{-\infty }^{x_n} p_X(\xi _1,\xi _2,\ldots ,\xi _n)\,d\xi _1\,d\xi _2\ldots d\xi _n \end{aligned}$$
or
$$\begin{aligned} p_X{(x_1,x_2,\ldots ,x_n)}=\frac{\partial ^n{F_X{(x_1,x_2,\ldots ,x_n)}}}{\partial {x_1}\partial {x_2}\ldots \partial {x_n}}. \end{aligned}$$
(2)
If \(\mu _X\) is not absolutely continuous, then its probability density function does not exist as an integrable function. However, it can be defined as a generalized function by interpreting (2) as a distributional derivative [7].
Let \({\mathbf{X}}\) be a vector random variable and \(f:{{\mathbb {R}}^{n}}\rightarrow {{\mathbb {R}}^{m}}\) a Borel function; then \({\mathbf{Y}}=f{\circ }{\mathbf{X}}\) is again a vector random variable and, for each \({B}\in {\mathcal {B}}({{\mathbb {R}}^{m}})\),
$$\begin{aligned} \mu _{Y}(B)=\mu _{X}\big (f^{-1}(B)\big). \end{aligned}$$
(3)
If \({\mathbf{X}}\) has a probability density function \(p_{X}\), then
$$\begin{aligned} \mu _{Y}(B)=\mu _{X}\big (f^{-1}(B)\big )=\int _{f^{-1}(B)} p_X({\mathbf{x}})d{\mathbf{x}} \end{aligned}$$
(4)
by (1) and (3). Moreover, if f is such that [8]
$$\begin{aligned} {\mathcal {L}}^m(B)=0\Rightarrow {\mathcal {L}}^n\left( f^{-1}(B)\right) =0 \end{aligned}$$
for every Borel subset B of \({\mathbb {R}}^m\), then, in view of (4), \(\mu _Y\) is (finite and) absolutely continuous with respect to \({\mathcal {L}}^m\) and thus admits a probability density function \(p_Y\). In the particular case when \(n=m\) and f is a diffeomorphism, i.e. a bijective map such that both f and \(f^{-1}\) are continuously differentiable, then
$$\begin{aligned} p_Y({\mathbf{y}})=p_X\left( f^{-1}({\mathbf{y}})\right) \left| Jf^{-1}({\mathbf{y}})\right| \end{aligned}$$
where \(Jf^{-1}\) denotes the Jacobian of \(f^{-1}\).
More generally, suppose that there exists a countable family \((B_i)_{i\in {I}}\) of pairwise disjoint open sets of \({\mathbb {R}}^n\), such that (i) the event
$$\begin{aligned} B=\bigcup _{i\in {I}}\{{\mathbf{X}}\in {B_i}\} \end{aligned}$$
has probability \({\mathbb {P}}(B)\) equal to 1; (ii) for each \(i\in {I}\), the restriction \(f_i\) of f to \(B_i\) is a diffeomorphism of \(B_i\) onto an open set \(C_i\subset {\mathbb {R}}^n\). Under this assumption, define
$$\begin{aligned} (p_Y)_i({\mathbf{y}})=p_X\left( f_i^{-1}({\mathbf{y}})\right) \left| Jf_i^{-1}({\mathbf{y}})\right| \chi _{C_i}({\mathbf{y}}) \end{aligned}$$
where
$$\begin{aligned} \chi _{C_i}({\mathbf{y}})=\left\{ \begin{array}{rl} 1 &{}\text{ if } {\mathbf{y}}\in {C_i}\\ 0 &{}\text{ otherwise } \end{array} \right. \end{aligned}$$
is the indicator function of \(C_i\). Then
$$\begin{aligned} p_{Y}=\sum _{i\in {I}}(p_{Y})_i. \end{aligned}$$
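As a concrete instance of the piecewise construction (an illustration of ours, not an example from the paper), let X be uniform on \((-1,1)\) and \(Y=X^2\). The restrictions of \(f(x)=x^2\) to \(B_1=(-1,0)\) and \(B_2=(0,1)\) are diffeomorphisms onto \(C_1=C_2=(0,1)\), with branches \(f_i^{-1}(y)=\mp \sqrt{y}\), and the sum of the two branch densities gives \(p_Y(y)=1/(2\sqrt{y})\) on (0, 1):

```python
import math

def p_X(x):
    """Density of X, uniform on (-1, 1)."""
    return 0.5 if -1.0 < x < 1.0 else 0.0

def p_Y(y):
    """Density of Y = X**2, assembled branch by branch:
    (p_Y)_i(y) = p_X(f_i^{-1}(y)) * |J f_i^{-1}(y)| * chi_{C_i}(y)."""
    if not 0.0 < y < 1.0:              # both images C_1 = C_2 = (0, 1)
        return 0.0
    branches = (lambda s: -math.sqrt(s), lambda s: math.sqrt(s))
    jac = 1.0 / (2.0 * math.sqrt(y))   # |d f_i^{-1} / dy|, same for both branches
    return sum(p_X(inv(y)) * jac for inv in branches)
```

On (0, 1) the two branches contribute \(0.5/(2\sqrt{y})\) each, so \(p_Y(y)=1/(2\sqrt{y})\), which is integrable although unbounded near the origin.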
A stochastic process \(\{{\mathbf{X}}_t, t\in {D}\}\), with \(D=[0,{\bar{t}}]\) a real interval, is a family of vector random variables indexed by a parameter t, defined on a common probability space \((\Omega ,{\mathcal {A}},{\mathbb {P}})\) and all with their values in \(({\mathbb {R}}^n,{\mathcal {B}}({\mathbb {R}}^n))\), which is called the state space. By definition, for each \(t\in {D}\), \({\mathbf{X}}_t\) is an \({\mathcal {A}}\)-measurable function and, for each \(\omega \in \Omega\), \(\{{\mathbf{X}}_t(\omega ), t\in {D}\}\) is a function defined on D that is called a \(\textit{sample function}\), \(\textit{realization}\) or \(\textit{trajectory}\) of the process.
Let \(z\in {{\mathbb {R}}}\) and let \(f:{\mathbb {R}}^m\rightarrow {\mathbb {R}}\) be a smooth function. Then the set
$$\begin{aligned} \{z=f\}=\big \{{\mathbf{x}}\in {\mathbb {R}}^m:z-f({\mathbf{x}})=0 \text{ and } |\nabla {f({\mathbf{x}})}|>0\big \}, \end{aligned}$$
if it is not empty, is a regular hypersurface of \({\mathbb {R}}^m\) whose orientation is determined by the unit normal vector \({\mathbf{n}}=\nabla {f}/|\nabla {f}|\). Let us put
$$\begin{aligned} f_{x_j}=\frac{\partial {f}}{\partial {x_j}} \end{aligned}$$
and denote by \(\mu _S\) the volume form on \(\{z=f\}\) [9], i.e. the differential form
$$\begin{aligned} \mu _S=\sum _{j=1}^{m}\frac{(-1)^{j-1}}{|\nabla {f}|}f_{x_j}dx_1\wedge \ldots \wedge {d{\hat{x}}_j}\wedge \ldots \wedge {dx_m}, \end{aligned}$$
(where \(d{\hat{x}}_j\) means “omit the factor \(dx_j\)”). Recalling that on the hypersurface \(\{z=f\}\) we have
$$\begin{aligned} df=\sum _{j=1}^{m}f_{x_j}dx_j=0, \end{aligned}$$
it is easy to verify that, for each \(j=1,\ldots ,m\), we have [7]
$$\begin{aligned} \mu _S=\frac{(-1)^{j-1}|\nabla {f}|}{f_{x_j}}dx_1\wedge \ldots \wedge {d{\hat{x}}_j}\wedge \ldots \wedge {dx_m}, \end{aligned}$$
(5)
wherever \(f_{x_j}\not =0\) holds.
For each \(\alpha \le {m}\), let \({\mathcal {H}}^{\alpha }\) be the Hausdorff measure in \({\mathbb {R}}^m\) [10], defined in such a way that \({\mathcal {H}}^{m}={\mathcal {L}}^{m}\). Then for every function g which is integrable on \(\{z=f\}\) we have
$$\begin{aligned} \int _{\{z=f\}}gd{\mathcal {H}}^{m-1}=\int _{\{z=f\}}g\mu _S. \end{aligned}$$
(6)
The following useful result is a consequence of the coarea formula. Let \(f:{\mathbb {R}}^m\rightarrow {\mathbb {R}}\) be a smooth function such that \(|\nabla {f({{\mathbf{x}}})}|>0\) everywhere, and let \(g:{\mathbb {R}}^m\rightarrow {\mathbb {R}}\) be an integrable function. Moreover, let us put
$$\begin{aligned} {\{z>f\}}=\{{\mathbf{x}}\in {{\mathbb {R}}^m}:z-f({\mathbf{x}})>0\}. \end{aligned}$$
Then
$$\begin{aligned} \int _{\{z>f\}}gd{\mathbf{x}}=\int _{-\infty }^z d\zeta \int _{\{\zeta =f\}} \frac{g}{|\nabla {f}|}d{\mathcal {H}}^{m-1} \end{aligned}$$
(7)
and, in particular,
$$\begin{aligned} \frac{d}{dz}\int _{\{z>f\}}gd{\mathbf{x}}= \int _{\{z=f\}} \frac{g}{|\nabla {f}|}d{\mathcal {H}}^{m-1}. \end{aligned}$$
(8)
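As a quick sanity check of (7) (a numerical sketch of ours, not part of the paper), take \(m=2\), \(f(x_1,x_2)=x_1^2+x_2^2\) and \(g\equiv 1\). Then \(|\nabla {f}|=2\sqrt{\zeta }\) on the level set \(\{\zeta =f\}\), which is a circle of length \(2\pi \sqrt{\zeta }\), so the inner integral equals \(\pi\) for every \(\zeta >0\) and both sides of (7) reduce to \(\pi {z}\):

```python
import math

def lhs(z, n=400):
    """Left side of the coarea identity with g = 1: area of {f < z} for
    f = x1^2 + x2^2, via a midpoint grid on the square [-sqrt(z), sqrt(z)]^2."""
    r = math.sqrt(z)
    h = 2.0 * r / n
    total = 0.0
    for i in range(n):
        x1 = -r + (i + 0.5) * h
        for j in range(n):
            x2 = -r + (j + 0.5) * h
            if x1 * x1 + x2 * x2 < z:
                total += h * h
    return total

def rhs(z, n=400):
    """Right side: the level set {zeta = f} is a circle of length
    2*pi*sqrt(zeta) and |grad f| = 2*sqrt(zeta), so the inner integral is pi."""
    h = z / n
    return sum(math.pi * h for _ in range(n))

# both sides equal pi * z; differentiating in z recovers (8), here the constant pi
```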

3 Stochastic dynamic system

The equation of motion of a body, discretized with respect to the space variables by the finite element method, reduces to a system of ODEs
$$\begin{aligned} {\mathbf {M}}\ddot{\mathbf{y}}+{\mathbf{f}}(\dot{{\mathbf{y}}},{\mathbf{y}}) ={\mathbf {B}}({\mathbf{y}},t)\varvec{\xi }(t),\qquad {\mathbf{y}}(t_0)={\mathbf{y}}_0, \qquad \dot{{\mathbf{y}}}(t_0)=\dot{{\mathbf{y}}}_0, \end{aligned}$$
(9)
where \(t\in {D}\) is the time and \({\mathbf{y}}, \dot{{\mathbf{y}}}\) and \(\ddot{{\mathbf{y}}}\) are the displacement, velocity and acceleration vectors, respectively, which are defined on D and take their values in \({\mathbb {R}}^n\). \({\textbf {M}}\) is the mass matrix, \({\mathbf {f}}\) is the internal force vector (including damping and restoring forces), \({\textbf {B}}\) is the input force influence matrix, \({\varvec{\xi }}\) is the external excitation vector, and \({\mathbf{y}}_0\) and \(\dot{{\mathbf{y}}}_0\) are the initial displacement and velocity vectors, respectively. By introducing the state vector
$$\begin{aligned} {\mathbf{x}}= \left\{ \begin{array}{cc} \dot{{\mathbf{y}}} \\ {\mathbf{y}} \end{array} \right\} , \end{aligned}$$
Equation (9) can be rewritten as
$$\begin{aligned} \dot{{\mathbf{x}}}={\mathbf {A}}({\mathbf{x}},t)+{\mathbf {B}}({\mathbf{x}},t) {\varvec{\xi }}(t),\qquad {\mathbf{x}}(t_0)={\mathbf{x}}_0 \end{aligned}$$
(10)
where
$$\begin{aligned} {\mathbf {A}}({\mathbf{x}},t)= \left( \begin{array}{cc} -{\mathbf {M}}^{-1}{\mathbf {f}}({\mathbf{x}})\\ \dot{{\mathbf{y}}} \end{array} \right) , \qquad {\mathbf {B}}({\mathbf{x}},t)= \left( \begin{array}{cc} {\mathbf {M}}^{-1}{\mathbf {B}}({\mathbf{x}},t)\\ {\mathbf {0}} \end{array} \right) . \end{aligned}$$
If randomness is present, coming from the initial conditions, the excitations, or the properties of the system, a random state equation can be written as [2]
$$\begin{aligned} \dot{{\mathbf{x}}}={\mathbf {G}}({\varvec{\theta }}, {\mathbf{x}}_0, t) \end{aligned}$$
(11)
where \({\varvec{\theta }}=(\theta _1,\theta _2,\ldots ,\theta _m)\in \Lambda \subset {{\mathbb {R}}^m}\) is the vector of all random parameters, i.e. the values assumed by the vector random variable \({\varvec{\Theta }}:\Omega \rightarrow {\mathbb {R}}^m\), which is supposed to be time-independent and to have a (joint) probability density function \(p _{{\varvec{\Theta }}}\).
If the (deterministic) problem (10) is well posed, then for each choice of \({\varvec{\theta }}\) and \({\mathbf{x}}_0\), Eq. (11) has one and only one solution
$$\begin{aligned} {\mathbf{x}}={\mathbf {H}}({\varvec{\theta }}, {\mathbf{x}}_0,t),\qquad {\mathbf {H}}({\varvec{\theta }}, {\mathbf{x}}_0,0)={\mathbf{x}}_0 \end{aligned}$$
with \({\mathbf {H}}:\Lambda \times {\mathbb {R}}^{2n}\times {D}\rightarrow {\mathbb {R}}^{2n}\) a suitable smooth function.
Below, for the sake of simplicity, we omit the explicit dependence on \({\mathbf{x}}_0\) and write
$$\begin{aligned} {\mathbf{x}}_t={\mathbf {H}}({\varvec{\theta }}, t) \end{aligned}$$
to denote the solution of (11).
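Numerically, the map \({\varvec{\theta }}\mapsto {\mathbf {H}}({\varvec{\theta }},t)\) is realized by a deterministic time integrator applied once for each value of the parameters. The following sketch (ours; the right-hand side G and the step count are placeholder choices, not quantities from the paper) uses the classical fourth-order Runge-Kutta scheme for a scalar state:

```python
def solve_state(G, theta, x0, t_end, n_steps=1000):
    """Integrate x' = G(theta, x, t), x(0) = x0, with the classical
    fourth-order Runge-Kutta scheme; the returned value plays the role
    of H(theta, t_end) for one realization of the parameters."""
    h = t_end / n_steps
    t, x = 0.0, x0
    for _ in range(n_steps):
        k1 = G(theta, x, t)
        k2 = G(theta, x + 0.5 * h * k1, t + 0.5 * h)
        k3 = G(theta, x + 0.5 * h * k2, t + 0.5 * h)
        k4 = G(theta, x + h * k3, t + h)
        x += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return x

# e.g. for the scalar problem x' = theta * x of Sect. 5.1 (with x(0) = 1),
# solve_state(lambda th, x, t: th * x, 0.5, 1.0, 1.0) approximates exp(0.5)
```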
In applications, we are interested in considering stochastic processes of the type
$$\begin{aligned} \textit{Z}_t({\varvec{\theta }})=\textit{Z}({\varvec{\theta }},t)=\psi \big ({\mathbf {H}}({\varvec{\theta }}, t)\big ), \end{aligned}$$
with
$$\begin{aligned} \textit{Z}({\varvec{\theta }},0)=\psi ({\mathbf{x}}_0)=z_0, \end{aligned}$$
(12)
and in determining the evolution of the law of \(\textit{Z}_t\). Here \(\psi :{\mathbb {R}}^{2n}\rightarrow {\mathbb {R}}\) is a deterministic smooth function. To this aim, for each \(t\in {D}\), let us consider the family of measures \(\{\mu _t^{\theta }:{\varvec{\theta }}\in {\Lambda }\}\) which are defined on \({\mathbb {R}}\) by
$$\begin{aligned} \mu _t^{\theta }(B)=\chi _B\big ({\textit{Z}}_t({\varvec{\theta }})\big )= \left\{ \begin{array}{cc} 1 &{} \text{ if } \textit{Z}_t({\varvec{\theta }})\in {B},\\ 0&{} \text{ otherwise }, \end{array} \right. \end{aligned}$$
(13)
for every Borel subset B of \({\mathbb {R}}\), with \(\chi _B\) the indicator function of B. The measure \(\mu _t^{\theta }\) is the conditional law of \(\textit{Z}_t\) given \({\varvec{\Theta }}={\varvec{\theta }}\), sometimes denoted by \(p_{\textit{Z}\varvec{\Theta }}(\textit{z}|\varvec{\Theta }={\varvec{\theta }},t)\). Indeed, once \({\varvec{\theta }}\) and t are fixed, \(\mu _t^{\theta }\) is the Dirac measure on \({\mathbb {R}}\) concentrated at the point \(\textit{Z}_t({\varvec{\theta }})\). The cumulative distribution function of \(\mu _t^{\theta }\) is the real function
$$\begin{aligned} F_t^{\theta }(\textit{z})=\mu _t^{\theta }((-\infty ,\textit{z}])= \left\{ \begin{array}{cc} 1 &{} \text{ if } \textit{z}\ge {\textit{Z}}_t({{\varvec{\theta }}}),\\ 0&{} \text{ otherwise }. \end{array} \right. \end{aligned}$$
(14)
Let \({\mathcal {C}}_0\) be the space of the continuous real functions that are compactly supported in \({\mathbb {R}}\), with the maximum norm \(|\cdot | _{{\mathcal {C}}_0}\). For every function \(f\in {{\mathcal {C}}_0}\) we have
$$\begin{aligned} \int _{{\mathbb {R}}} f(\textit{z})\mu _t^{\theta } (d\textit{z})=f(\textit{Z}_t({\varvec{\theta }})) \end{aligned}$$
(15)
(this is true for indicator functions of measurable sets by (13), hence for simple functions, and the general case follows from an approximation argument). Therefore, the map \({\varvec{\theta }}\mapsto \int _ {{\mathbb {R}}} f(\textit{z})\mu _t^{\theta } (d\textit{z})\) is measurable on \(\Lambda\).
Proposition
(i) For fixed \(t\in {D}\), there is one and only one measure \(\mu _t\) on \({\mathbb {R}}\) such that
$$\begin{aligned} \int _{{\mathbb {R}}} f(\textit{z})\mu _t(d\textit{z})=\int _{\Lambda } p_{{\varvec{\Theta }}}({\varvec{\theta }}) d{\varvec{\theta }} \int _{{\mathbb {R}}} f(\textit{z})\mu _t^{\theta }(d\textit{z}) \end{aligned}$$
(16)
for each \(f\in {{\mathcal {C}}_0}\).
(ii) \(\mu _t\) is the law of \(\textit{Z}_t\), i.e., for every Borel subset B of \({\mathbb {R}}\),
$$\begin{aligned} \mu _t(B)={\mathbb {P}}\left( \textit{Z}_t^{-1}(B)\right) . \end{aligned}$$
Proof
(i) Define
$$\begin{aligned} \Phi (f)=\int _{\Lambda } p_{\varvec{\Theta }}({\varvec{\theta }}) d{\varvec{\theta }} \int _{{\mathbb {R}}} f(\textit{z})\mu _t^{\theta }(d\textit{z}). \end{aligned}$$
Then, noting that
$$\begin{aligned} \int _{{\mathbb {R}}}\mu _t^{\theta }(d\textit{z})=\mu _t^{\theta }({\mathbb {R}})=1 \qquad \text{ and } \qquad \int _{\Lambda } p_{\varvec{\Theta }}({\varvec{\theta }}) d{\varvec{\theta }}={\mathbb {P}}(\Omega )=1 \end{aligned}$$
we obtain
$$\begin{aligned} |\Phi (f)|\le {\int _{\Lambda } p_{\varvec{\Theta }}({\varvec{\theta }}) d{\varvec{\theta }}\int _{{\mathbb {R}}} |f(\textit{z})|\mu _t^{\theta }(d\textit{z})}\le {|f|_{{\mathcal {C}}_0}\int _{\Lambda } p_{\varvec{\Theta }}({\varvec{\theta }}) d{\varvec{\theta }}}\int _{{\mathbb {R}}}\mu _t^{\theta }(d\textit{z})=|f|_{{\mathcal {C}}_0}. \end{aligned}$$
Thus, \(\Phi\) is a linear and continuous functional on \({\mathcal {C}}_0\) and (i) follows from the Riesz representation theorem [11].
(ii) For every Borel subset B of \({\mathbb {R}}\) we have
$$\begin{aligned} \mu _t(B)= & {} \int _{{\mathbb {R}}}\chi _B(\textit{z})\mu _t(d\textit{z})=\int _{\Lambda }p_{\varvec{\Theta }}({\varvec{\theta }})d{\varvec{\theta }}\int _{{\mathbb {R}}}\chi _B(\textit{z})\mu _t^{\theta }(d\textit{z})=\int _{\Lambda }\chi _B(\textit{Z}_t({\varvec{\theta }}))p_{\varvec{\Theta }}({\varvec{\theta }})d{\varvec{\theta }}=\nonumber \\&\int _{\Lambda }p_{\varvec{\Theta }}({\varvec{\theta }})\chi _{\textit{Z}_t^{-1}(B)}({\varvec{\theta }})d{\varvec{\theta }}= \int _{\textit{Z}_t^{-1}(B)}p_{\varvec{\Theta }}({\varvec{\theta }})d{\varvec{\theta }}={\mathbb {P}}(\textit{Z}_t^{-1}(B)), \end{aligned}$$
(17)
where the second identity follows from (16), the third follows from (15) and the fourth follows from the relation \(\chi _B\circ {\textit{Z}_t}=\chi _{\textit{Z}_t^{-1}(B)}\). \(\square\)
If we denote by \(F_\textit{Z}(\textit{z},t)\) the cumulative distribution function of \(\mu _t\), with the help of (16) and (14) we can write
$$\begin{aligned} F_{{\textit{Z}}}(\textit{z},t)=\mu _t( (-\infty ,\textit{z}])=\int _{{\mathbb {R}}}\chi _{ (-\infty ,\textit{z}]}\mu _t(d\textit{z})=\int _{\Lambda }p_{\varvec{\Theta }}({\varvec{\theta }})d{\varvec{\theta }}\int _{{\mathbb {R}}}\chi _{(-\infty ,\textit{z}]}\mu _t^{\theta }(dz)= \end{aligned}$$
$$\begin{aligned} \int _{\Lambda }p_{\varvec{\Theta }}({\varvec{\theta }})\mu _t^{\theta }((-\infty ,\textit{z}])d{\varvec{\theta }}=\int _{\Lambda }p_{\varvec{\Theta }}({\varvec{\theta }})\chi _{\{\textit{Z}_t\le {\textit{z}}\}}d{\varvec{\theta }}=\int _{\{\textit{Z}_t\le {\textit{z}}\}}p_{\varvec{\Theta }}({\varvec{\theta }})d{\varvec{\theta }}, \end{aligned}$$
(18)
where
$$\begin{aligned} {\{\textit{Z}_t\le {\textit{z}}\}}=\{{\varvec{\theta }}\in {\Lambda }:\textit{z}-\textit{Z}_t({\varvec{\theta }})\ge 0\}. \end{aligned}$$
(19)
Moreover, under the hypothesis that, for each \(t\in {D}\), \(\textit{Z}_t\) is a smooth function with
$$\begin{aligned} \inf _{{\varvec{\theta }}\in \Lambda }|\nabla {Z}_t({\varvec{\theta }})|>0, \end{aligned}$$
(20)
from (18) and (7) it follows that
$$\begin{aligned} F_Z(z,t)=\int _{-\infty }^z d\zeta \int _{\{Z_t=\zeta \}} \frac{p_{\varvec{\Theta }}({\varvec{\theta }})}{|\nabla {Z}_t({\varvec{\theta }})|}d{\mathcal {H}}^{m-1}({\varvec{\theta }}), \end{aligned}$$
(21)
where
$$\begin{aligned} \{Z_t=\zeta \}=\{{\varvec{\theta }}\in {\Lambda }:\zeta -Z_t({\varvec{\theta }})=0\}. \end{aligned}$$
In many applications of interest, for each \(t\in {D}\) and \(B\in {{\mathcal {B}}({\mathbb {R}})}\), \(Z_t\) is such that
$$\begin{aligned} {\mathcal {L}}(B)=0\Rightarrow {\mathcal {L}}^m(Z_t^{-1}(B))=0, \end{aligned}$$
so that, in view of (17), \(\mu _t(B)=0\) as well; therefore \(\mu _t\) is absolutely continuous with respect to \({\mathcal {L}}\) [and \(F_Z(\cdot ,t)\) is an absolutely continuous real function] [6]. Then, by the Radon-Nikodym theorem, there exists a probability density function \(p_Z(z,t)\) of \(\mu _t\), i.e. for every integrable function f it holds that
$$\begin{aligned} \int _{\mathbb {R}} f(z)\mu _t(dz)=\int _{\mathbb {R}}f(z)p_Z(z,t)dz. \end{aligned}$$
Thus, in this case, from (21) and (8) we deduce
$$\begin{aligned} p_Z(z,t)=\frac{dF_Z(z,t)}{dz}=\int _{\{Z_t=z\}} \frac{p_{\varvec{\Theta }}({\varvec{\theta }})}{|\nabla {Z}_t({\varvec{\theta }})|}d{\mathcal {H}}^{m-1}({\varvec{\theta }}). \end{aligned}$$
(22)
For all z and t, \({\{Z_t=z\}}\) is a regular hypersurface of \(\Lambda\) by (20). Moreover, in view of (5) and (6), we have
$$\begin{aligned} \int _{\{Z_t=z\}} \frac{p_{\varvec{\Theta }}({\varvec{\theta }})}{|\nabla {Z}_t({\varvec{\theta }})|}d{\mathcal {H}}^{m-1}({\varvec{\theta }})=\int _{\{Z_t=z\}}(-1)^{j-1} \frac{p_{\varvec{\Theta }}({\varvec{\theta }})}{\frac{\partial {Z_t}}{\partial {\theta _j}}}d\theta _1\wedge \ldots \wedge {d{\hat{\theta }}_j}\wedge \ldots \wedge {d\theta _m} \end{aligned}$$
(23)
if on the hypersurface \({\{Z_t=z\}}\) the inequality \(\frac{\partial {Z_t}}{\partial {\theta _j}}\ne {0}\) holds.
As will be shown later in the examples, Eqs. (18), (21) and (22), (23) can be used for the explicit calculation of the cumulative distribution function and the probability density function, respectively, even if the dimension of \(\Lambda\) is greater than one.
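In the simplest case of a single random parameter (\(m=1\)), the level sets \(\{Z_t=z\}\) in (22) are finite point sets and \({\mathcal {H}}^{0}\) is the counting measure, so (22) becomes a finite sum over the roots of \(Z_t(\theta )=z\). As an illustration of ours (not one of the paper's examples), take \(Z_t(\theta )=\theta {e^t}\) with \(\Theta\) uniform on [0, 1]: there is a single root \(\theta ^*=ze^{-t}\), \(|Z_t'|=e^t\), and (22) gives \(p_Z(z,t)=e^{-t}\) for \(0<z<e^t\):

```python
import math

def p_Theta(theta):
    """Uniform density on [0, 1]."""
    return 1.0 if 0.0 <= theta <= 1.0 else 0.0

def p_Z(z, t):
    """Density of Z_t = Theta * exp(t) via the level-set sum (22):
    Z_t(theta) = z has the single root theta* = z * exp(-t),
    and |Z_t'(theta)| = exp(t) is constant."""
    theta_star = z * math.exp(-t)
    return p_Theta(theta_star) / math.exp(t)
```

The result, \(p_Z(z,t)=e^{-t}\) on \((0,e^t)\) and zero elsewhere, integrates to one for every t, as it must.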

4 The evolution of the law of \(Z_t\)

In [2] the authors obtain a linear PDE in the unknown
$$\begin{aligned} p_{Z\Theta }(z,{\varvec{\theta }},t)=p_{Z\Theta }(z|\Theta ={\varvec{\theta }},t)p_{\Theta }({\varvec{\theta }}) \end{aligned}$$
which, once solved, allows one to determine the probability density function \(p_Z(z,t)\) of \(Z_t\) by integrating \(p_{Z\Theta }\) over \(\Lambda\) with respect to \({\varvec{\theta }}\). Below we propose a similar method to determine directly the cumulative distribution function \(F_Z(z,t)\) of \(Z_t\). Let us start with some notation.
Let \(N={\mathbb {R}}\times {\Lambda }\times {D}\subset {{\mathbb {R}}^{m+2}}\), let \(G(z,{\varvec{\theta }},t)=z-Z({\varvec{\theta }},t)\), and let \(\chi _K\) be the indicator function of the region
$$\begin{aligned} K=\{(z,{\varvec{\theta }},t)\in {N}:G\ge {0}\} \end{aligned}$$
that is
$$\begin{aligned} \chi _K(z,{\varvec{\theta }},t)= \left\{ \begin{array}{rl} 1 &{}\text{ if } z-Z({\varvec{\theta }},t)\ge {0} \\ 0&{} \text{ otherwise } \end{array} \right. \end{aligned}$$
(24)
(note that, for each pair z and t, the region \(\{Z_t\le {z}\}\) defined in (19) is the \({\varvec{\theta }}\)-section of K). Let us denote by \({\mathcal {D}}\) the space of infinitely differentiable real functions with compact support in N.
Consider the two distributions [7] which, for every \(\phi \in {{\mathcal {D}}}\), are defined in N by
$$\begin{aligned} <\chi _K,\phi >=\int _N \chi _K({\mathbf{x}})\phi ({\mathbf{x}})d{\mathbf{x}}=\int _K\phi ({\mathbf{x}})d{\mathbf{x}}, \end{aligned}$$
with \({\mathbf{x}}\)=\((z,{\varvec{\theta }},t)\), and
$$\begin{aligned} <\delta (G),\phi >=\int _N \delta (G({\mathbf{x}}))\phi ({\mathbf{x}}) d{\mathbf{x}}=\int _{{\mathcal {I}}}\frac{\phi }{|\nabla {G}|} d{\mathcal {H}}^{m+1}, \end{aligned}$$
where
$$\begin{aligned} {\mathcal {I}}=\{(z,{\varvec{\theta }},t)\in {N}:G(z,{\varvec{\theta }},t)=0\}, \end{aligned}$$
is a regular hypersurface, because in N we have \(|\nabla {G}|=\sqrt{1+|\nabla {Z}|^2}\ge {1}\). Note that while the first distribution is regular, the second is concentrated on a set that is negligible with respect to the Lebesgue measure.
It can be proved [7] that the distributional derivatives of \(\chi _K\) can be expressed in terms of \(\delta (G)\), namely
$$\begin{aligned} \frac{\partial {\chi _K}}{\partial {z}}=\delta (G) \qquad \frac{\partial {\chi _K}}{\partial {\theta _j}}=-\frac{\partial {Z}}{\partial {\theta _j}}\delta (G) \qquad \frac{\partial {\chi _K}}{\partial {t}}=-\frac{\partial {Z}}{\partial {t}}\delta (G) \end{aligned}$$
(25)
or also
$$\begin{aligned} \nabla {\chi _K}=\delta ({G})\nabla {G}. \end{aligned}$$
From (25)\(_1\) and (25)\(_3\) we deduce the PDE
$$\begin{aligned} \frac{\partial {\chi _K(z,{\varvec{\theta }},t)}}{\partial {t}}+\frac{\partial {Z({\varvec{\theta }},t)}}{\partial {t}}\frac{\partial {\chi _K(z,{\varvec{\theta }},t)}}{\partial {z}}=0 \end{aligned}$$
(26)
with the initial conditions
$$\begin{aligned} \chi _K(z,{\varvec{\theta }},0)= \left\{ \begin{array}{rl} 1 &{}\text{ if } z\ge {z_0} \\ 0&{} \text{ otherwise } \end{array} \right. \end{aligned}$$
which follows from (24) and (12).
Once \(\chi _K\) has been calculated, \(F_Z\) is obtained from the relation
$$\begin{aligned} F_Z(z,t)=\int _{\Lambda } \chi _{K}(z,{\varvec{\theta }},t) p_{\varvec{\Theta }}({\varvec{\theta }})d{\varvec{\theta }}. \end{aligned}$$
(27)
In a similar way [7], the distributional derivative \(\delta ^{\prime }\) of \(\delta\) is defined, and it can be shown that
$$\begin{aligned} \frac{\partial {\delta (G)}}{\partial {z}}=\delta ^{\prime }(G), \qquad \frac{\partial {\delta (G)}}{\partial {\theta _j}}=-\frac{\partial {Z}}{\partial {\theta _j}}\delta ^{\prime }(G), \qquad \frac{\partial {\delta (G)}}{\partial {t}}=-\frac{\partial {Z}}{\partial {t}}\delta ^{\prime }(G) \end{aligned}$$
from which the PDE that has been proposed in [2] can be obtained, i.e.
$$\begin{aligned} \frac{\partial {p_{Z\varvec{\Theta }}(z,{\varvec{\theta }},t)}}{\partial {t}}+\frac{\partial {Z({\varvec{\theta }},t)}}{\partial {t}}\frac{\partial {p_{Z\varvec{\Theta }}(z,\theta ,t)}}{\partial {z}}=0. \end{aligned}$$
(28)
The corresponding initial condition is
$$\begin{aligned} p_{Z\varvec{\Theta }}(z,{{\varvec{\theta }}},0)=\delta {(z-z_0)}p_{\varvec{\Theta }}({{\varvec{\theta }}}) \qquad \text{ or } \qquad p_{Z\varvec{\Theta }}(z,\theta ,0)=p_{Z_0}(z)p_{{\varvec{\Theta }}}({{\varvec{\theta }}}), \end{aligned}$$
depending on whether \(z_0\) is deterministic or not. Once \(p_{Z\varvec{\Theta }}\) has been calculated, \(p_Z\) is obtained from the relation
$$\begin{aligned} p_Z(z,t)=\int _{\Lambda } p_{Z{\varvec{\Theta }}}(z,{\varvec{\theta }},t) d{\varvec{\theta }}. \end{aligned}$$
(29)
Thus \(\chi _K\) and \(p_{Z\varvec{\Theta }}\) satisfy the same distributional PDE but with different initial conditions.
In applications, system (11) and Eq. (26) [or (28)] are both solved numerically, after which \(F_Z(z,t)\) is determined by relation (27) [or \(p_Z(z,t)\) by (29)]. Two examples are presented below where the whole procedure can be followed explicitly. Let us briefly outline the solution technique for Eq. (26); the analogous treatment of Eq. (28) is dealt with in detail in [2].
Using the method of characteristics [12], we can write the following system of ODEs with the corresponding initial conditions
$$\begin{aligned} \begin{array}{cc} \frac{dz}{ds}=\frac{\partial {Z({\varvec{\theta }},t)}}{\partial {t}} &{} z(0)=\tau , \\ \frac{dt}{ds}=1 &{} t(0)=0, \\ \frac{d{\chi _K}}{ds}=0 &{} \chi _K(\tau ,{\varvec{\theta }},0)=\left\{ \begin{array}{rl} 1 &{}\text{ if } \tau \ge {z_0}, \\ 0&{} \text{ otherwise }. \end{array} \right. \end{array} \end{aligned}$$
(30)
From (30)\(_2\) we obtain \(t=s\), which, in view of (30)\(_1\), yields the following chain of implications
$$\begin{aligned} \begin{array}{cc} \frac{dz}{dt}=\frac{\partial {Z({\varvec{\theta }},t)}}{\partial {t}}\Rightarrow \\ z(s,\tau )=Z({\varvec{\theta }},s)+\tau -Z({\varvec{\theta }},0)\Rightarrow \\ \tau =z-Z({\varvec{\theta }},t) +Z({\varvec{\theta }},0). \end{array} \end{aligned}$$
(31)
From (30)\(_3\) and (31), taking into account that \(Z({\varvec{\theta }},0)=z_0\) by (12), we deduce
$$\begin{aligned} \chi _K(z,{\varvec{\theta }},t)=\chi _K(\tau ,{\varvec{\theta }},0)=\left\{ \begin{array}{rl} 1 &{}\text{ if } \tau \ge {Z({\varvec{\theta }},0)}, \\ 0&{} \text{ otherwise } \end{array} \right. =\left\{ \begin{array}{rl} 1 &{}\text{ if } z-Z({\varvec{\theta }},t)\ge {0}, \\ 0&{} \text{ otherwise. } \end{array} \right. \end{aligned}$$
Finally, from (27) we obtain
$$\begin{aligned} F_Z(z,t)=\int _{\Lambda }\chi _K(z,{\varvec{\theta }},t)p_{\varvec{\Theta }}({\varvec{\theta }}) d{\varvec{\theta }}=\int _{\{Z_t\le {z}\}}p_{\varvec{\Theta }}({\varvec{\theta }}) d{\varvec{\theta }} \end{aligned}$$
(32)
which coincides with (18).
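A direct numerical transcription of (32) can be sketched as follows (a sketch of ours, not the procedure implemented in MADY): once \(Z({\varvec{\theta }},t)\) is available from the solved dynamic system, \(\chi _K\) is evaluated pointwise and integrated against \(p_{\varvec{\Theta }}\) over \(\Lambda\) by a midpoint rule. The rectangular \(\Lambda\) and the grid size below are assumptions for illustration; the example process anticipates Sect. 5.1.

```python
import math

def F_Z(z, t, Z, p_Theta, bounds, n=400):
    """Cumulative distribution function of Z_t by (32): midpoint quadrature
    of chi_K(z, theta, t) * p_Theta(theta) over the rectangle Lambda."""
    (a1, b1), (a2, b2) = bounds
    h1, h2 = (b1 - a1) / n, (b2 - a2) / n
    total = 0.0
    for i in range(n):
        th1 = a1 + (i + 0.5) * h1
        for j in range(n):
            th2 = a2 + (j + 0.5) * h2
            if z - Z((th1, th2), t) >= 0.0:   # chi_K = 1 on {G >= 0}
                total += p_Theta((th1, th2)) * h1 * h2
    return total

# the process of Sect. 5.1: Z(theta, t) = theta_2 * exp(theta_1 * t),
# with Theta uniform on the unit square
Z_ex = lambda th, t: th[1] * math.exp(th[0] * t)
p_ex = lambda th: 1.0
```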

5 Examples

5.1 The motion of a material point

Let us consider the rectilinear motion of a material point whose velocity is proportional to the traveled distance. The proportionality coefficient and the initial position are expressed by two independent positive-valued random variables \(\Theta _1\) and \(\Theta _2\), respectively. Distance x is measured in meters and time t in seconds, and a superimposed dot indicates differentiation with respect to time. We can write
$$\begin{aligned} {\dot{x}}=\theta _1x, \qquad x(0)=\theta _2 \end{aligned}$$
and
$$\begin{aligned} {\varvec{\theta }}=(\theta _1,\theta _2)\in {\Lambda }={\mathbb {R}}^+\times {\mathbb {R}}^+, \qquad t\in {D}=[0,\infty ). \end{aligned}$$
By separating the variables we obtain
$$\begin{aligned} \ln \left( {\frac{x}{\theta _2}}\right) =\theta _1t, \end{aligned}$$
from which, assuming that \(\psi\) is the identity function,
$$\begin{aligned} Z({\varvec{\theta }},t)=x({\varvec{\theta }},t)=\theta _2e^{\theta _1t} \qquad \text{ and } \qquad \frac{\partial {Z}}{\partial {t}}=\theta _1\theta _2e^{\theta _1t} \end{aligned}$$
follow. As
$$\begin{aligned} \nabla {Z}({\varvec{\theta }},t)= \left( \begin{array}{cc} \frac{\partial {Z}}{\partial {\theta _1}},\frac{\partial {Z}}{\partial {\theta _2}} \end{array} \right) =e^{\theta _1t}(\theta _2t,1), \end{aligned}$$
from (21) we obtain the cumulative distribution function of \(Z_t\)
$$\begin{aligned} F_Z(z,t)=\int _{-\infty }^z d\zeta \int _{\{Z_t=\zeta \}} \frac{p_{\varvec{\Theta }}}{e^{\theta _1t}\sqrt{1+\theta _2^2t^2}} d{\mathcal {H}}^1({\varvec{\theta }}), \end{aligned}$$
where \(p_{\varvec{\Theta }}\) is the joint probability density function of the random variables \(\Theta _1\) and \(\Theta _2\).
Alternatively, relation (18) can be used,
$$\begin{aligned} F_Z(z,t)=\int _{\{Z_t\le {z}\}} p_{\varvec{\Theta }}({\varvec{\theta }})d{\varvec{\theta }}. \end{aligned}$$
In particular, we assume that \(\Theta _1\) and \(\Theta _2\) are two independent variables, each uniformly distributed on the interval [0, 1], so that \(p_{\varvec{\Theta }}\) is the indicator function of the square \(Q=[0,1]\times [0,1]\). Then, we obtain (Fig. 1)
$$\begin{aligned} F_Z(z,t)= & {} \int _{\{Z_t\le {z}\}\cap {Q}} p_{\varvec{\Theta }}({\varvec{\theta }})d{\varvec{\theta }} \\= & {} \left\{ \begin{array}{ll} \int _0^1 ze^{-\theta _1t} d\theta _1=\frac{z}{t}(1-e^{-t}) &{} \text{ if } 0<z\le {1},\\ \ln {z}/t+\int _{\frac{\ln {z}}{t}}^1 ze^{-\theta _1t}d\theta _1=\frac{1}{t}(\ln {z}+1-ze^{-t}) &{} \text{ if } 1<z\le {e^t},\\ 1 &{} \text{ if } z>e^t, \end{array} \right. \end{aligned}$$
from which, deriving with respect to z, we deduce the probability density function
$$\begin{aligned} p_Z(z,t)= \left\{ \begin{array}{ll} \frac{1}{t}(1-e^{-t}) &{} \text{ if } 0<z\le {1},\\ \frac{1}{t}(\frac{1}{z}-e^{-t}) &{} \text{ if } 1<z\le {e^t},\\ 0 &{} \text{ if } z>e^t. \end{array} \right. \end{aligned}$$
(33)
Obviously, the same result is obtained by directly using the relations (22) and (23). Thus for example, for \(0<z\le 1\), we have (Fig. 1)
$$\begin{aligned} p_Z(z,t)= & {} \int _{\{Z_t=z\}} \frac{p_{\varvec{\Theta }}({\varvec{\theta }})}{\Vert \nabla {Z}_t({\varvec{\theta }})\Vert } d{\mathcal {H}}^1({\varvec{\theta }})=\int _{\{Z_t=z\}} \frac{d\theta _2}{\frac{\partial {Z}_t}{\partial {\theta _1}}} \\= & {} \int _{ze^{-t}}^z \frac{d\theta _2}{tz}=\frac{1}{t}(1-e^{-t}), \end{aligned}$$
according to (33)\(_1\).
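The closed-form law (33) can be checked independently by sampling. The following sketch compares a Monte Carlo estimate of the cumulative distribution function with the explicit expression obtained above; the sample size, the evaluation grid and the instant \(t=1\) are illustrative choices, not taken from the paper.

```python
import numpy as np

# Monte Carlo check of the closed-form CDF of Z_t = Theta2 * exp(Theta1 * t),
# with Theta1, Theta2 independent and uniform on [0, 1].
def F_exact(z, t):
    """Explicit CDF of Z_t, piecewise as derived above."""
    z = np.asarray(z, dtype=float)
    F = np.ones_like(z)
    F[z <= 0.0] = 0.0
    low = (z > 0.0) & (z <= 1.0)
    mid = (z > 1.0) & (z < np.exp(t))
    F[low] = z[low] * (1.0 - np.exp(-t)) / t
    F[mid] = (np.log(z[mid]) + 1.0 - z[mid] * np.exp(-t)) / t
    return F

rng = np.random.default_rng(0)
t = 1.0
theta = rng.random((2, 1_000_000))          # samples of (Theta1, Theta2)
samples = theta[1] * np.exp(theta[0] * t)   # samples of Z_t

zs = np.linspace(0.05, 3.0, 60)
F_mc = (samples[None, :] <= zs[:, None]).mean(axis=1)
err = np.max(np.abs(F_mc - F_exact(zs, t)))
print(err)   # discrepancy, small for 10^6 samples
```

With \(10^6\) samples the discrepancy is of the order of the Monte Carlo sampling error, which supports the explicit computation.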
Equation (26) and the corresponding initial conditions take the form
$$\begin{aligned} \frac{\partial {\chi _K(z,{\varvec{\theta }},t)}}{\partial {t}}+\theta _1\theta _2e^{\theta _1t} \frac{\partial {\chi _K(z,{\varvec{\theta }},t)}}{\partial {z}}=0 \end{aligned}$$
and
$$\begin{aligned} \chi _K(z,{\varvec{\theta }},0)= \left\{ \begin{array}{ll} 1 &{} \text{ if } z\ge \theta _2, \\ 0 &{} \text{ otherwise }, \end{array} \right. \end{aligned}$$
respectively.
The numerical solution can be accomplished by means of the following steps.
Step 1 Construct a partition of \(\Lambda =[0,1]\times [0,1]\), in a finite number N of squares \(\Lambda _i\) with center \({\varvec{\theta }}^i=(\theta _1^i,\theta _2^i)\) \((i=1,\ldots ,N)\), and assign to each \({\varvec{\theta }}^i\) the corresponding probability mass \(p_{\varvec{\Theta }}({\varvec{\theta }}^i)\,|\Lambda _i|=\frac{1}{N}\).
Step 2 Discretize the first quadrant of the zt plane by choosing a mesh width \(\Delta z\) and a time step \(\Delta t\), and defining the discrete mesh points \((z_j,t_n)\) by
$$\begin{aligned} z_j=j \Delta z, \qquad t_n=n\Delta t. \end{aligned}$$
Then, for each representative point \({\varvec{\theta }}^i\), solve the equation
$$\begin{aligned} \frac{\partial {\chi _K(z,{\varvec{\theta }}^i,t)}}{\partial {t}}+\theta _1^i\theta _2^ie^{\theta _1^it} \frac{\partial {\chi _K(z,{\varvec{\theta }}^i,t)}}{\partial {z}}=0, \end{aligned}$$
with the help of the initial condition
$$\begin{aligned} \chi _K(z,{\varvec{\theta }}^i,0)= \left\{ \begin{array}{ll} 1 &{} \text{ if } z\ge \theta ^i_2, \\ 0 &{} \text{ otherwise }, \end{array} \right. \end{aligned}$$
by using the total variation diminishing (TVD) method [2, 13].
Step 3 Finally, determine the cumulative distribution function by means of the discretized version of the relation (32), i.e. for each mesh point \((z_j,t_n)\) compute
$$\begin{aligned} F_Z(z_j,t_n)=\sum _{i=1}^{N} \frac{\chi _K(z_j,{\varvec{\theta }}^i,t_n)}{N}. \end{aligned}$$
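Steps 1–3 can be sketched as follows. This is an illustrative implementation, assuming a first-order upwind scheme in place of the TVD scheme used in the paper, and a coarser grid than the one reported below; the exact cell masses serve as the reference solution.

```python
import numpy as np

# Step 1: partition Lambda = [0,1]x[0,1] into n_side x n_side squares,
# each carrying probability mass 1/N.
n_side = 10
centers = (np.arange(n_side) + 0.5) / n_side
th1, th2 = np.meshgrid(centers, centers, indexing="ij")
th1, th2 = th1.ravel(), th2.ravel()
N = th1.size

# Step 2: z-t grid and, per representative point, upwind advection of chi_K
# with speed theta1*theta2*exp(theta1*t) (positive, so upwind from the left).
dz, dt, T = 0.02, 0.005, 0.5
z = np.arange(0.0, 4.0 + dz, dz)
chi = (z[None, :] >= th2[:, None]).astype(float)   # initial condition

for n in range(int(round(T / dt))):
    t = n * dt
    a = th1 * th2 * np.exp(th1 * t)                # advection speed per cell
    chi[:, 1:] -= (a * dt / dz)[:, None] * (chi[:, 1:] - chi[:, :-1])
    chi[:, 0] = 0.0                                # inflow value at z = 0

# Step 3: discretized version of (32), averaging over the partition.
F_num = chi.mean(axis=0)

# Reference: exact mass of {theta2 * exp(theta1*T) <= z} on the same partition.
F_exact = ((th2[:, None] * np.exp(th1[:, None] * T)) <= z[None, :]).mean(axis=0)
print(np.max(np.abs(F_num - F_exact)))
```

The first-order scheme smears the transported jumps, so the agreement is cruder than with the TVD scheme; refining the partition and the z-t grid reduces the discrepancy.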
Figs. 2 and 3 show the numerical results that have been obtained with \(\Delta z=6.25\cdot 10^{-3}\) m and \(\Delta t=6.25\cdot 10^{-4}\) s. Each side of \(\Lambda\) has been divided into 40 equal sub-intervals, so that \(N=1600\).
In addition to the cumulative function, the probability density function \(p_Z(z,t)\) was also numerically calculated by means of Eqs. (28) and (29), at the same four instants previously considered. The calculations have been made for \(N=40\times 40\), \(N=80\times 80\) and \(N=160\times 160\). We used \(\Delta z= 6.25\cdot 10^{-3}\) m and \(\Delta t= 6.25\cdot 10^{-4}\) s in the first two cases, and \(\Delta z= 3.125\cdot 10^{-3}\) m and \(\Delta t= 3.125\cdot 10^{-4}\) s in the third case. The obtained results are shown in Fig. 4 for \(t=t_1\) and \(t=t_4\); the other cases are similar.
At each instant t a possible estimate of the relative error is given by the formula
$$\begin{aligned} \delta (t)=\frac{\sqrt{\sum _{i=1}^{N}(f(z_i,t)-g(z_i,t))^2}}{\sqrt{\sum _{i=1}^{N}(f(z_i,t))^2}} \end{aligned}$$
(34)
where the values of \(f(z_i,t)\) and \(g(z_i,t)\) are obtained explicitly and numerically, respectively.
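Formula (34) is straightforward to implement; a minimal sketch follows, where the function name and the sample values are illustrative.

```python
import numpy as np

def relative_error(f_vals, g_vals):
    """Discrete relative L2 error (34): f exact values, g numerical values."""
    f = np.asarray(f_vals, dtype=float)
    g = np.asarray(g_vals, dtype=float)
    return np.sqrt(np.sum((f - g) ** 2)) / np.sqrt(np.sum(f ** 2))

# e.g. a single wrong entry of size 1 against an exact vector of norm 3
print(relative_error([1.0, 2.0, 2.0], [1.0, 2.0, 1.0]))  # 0.3333...
```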
The error was calculated for \(0\le z \le 4\); in all the examined cases its order of magnitude turned out to be practically constant over time. In the case of the cumulative function, the maximum error was less than \(1.7\cdot 10^{-3}\). In the case of the density function, it was less than \(1.9\cdot 10^{-1}\) for \(N=40 \times 40\), less than \(1.4\cdot 10^{-1}\) for \(N=80 \times 80\) and less than \(0.9\cdot 10^{-1}\) for \(N=160 \times 160\).

5.2 The Klein–Gordon equation

Let \({\mathfrak {S}}=\{x \in {\mathbb {R}}: 0\le {x} \le {l}\}\) be the reference configuration of a string which is made of elastic material having a mass density \(\rho\) per unit length. The string is fixed at its ends and is subjected to a constant tension \(\tau\) and an additional force proportional to the displacement u. The equation of motion for oscillations of small amplitude is
$$\begin{aligned} \frac{\partial ^2{u}}{\partial {t^2}}-c^2\frac{\partial ^2{u}}{\partial {x^2}}+\alpha ^2u=0, \end{aligned}$$
(35)
where
$$\begin{aligned} c=\sqrt{\frac{\tau }{\rho }} \end{aligned}$$
(36)
and \(\alpha ^2u\) is the additional force per unit mass. This is also the Klein–Gordon equation of quantum field theory [12, 14].
The function
$$\begin{aligned} u(x,t)=A\sin \left( \frac{\pi x}{l}\right) \cos (\omega t) \end{aligned}$$
(37)
solves Eq. (35) for
$$\begin{aligned} \omega =\sqrt{\left( \frac{\pi c}{l}\right) ^2+\alpha ^2}, \end{aligned}$$
(38)
and satisfies the boundary conditions
$$\begin{aligned} u(0,t)=u(l,t)=0 \end{aligned}$$
and the initial conditions
$$\begin{aligned} u(x,0) =A\sin \left( \frac{\pi x}{l}\right) \quad \text{ and } \quad \frac{\partial {u(x,0)}}{\partial {t}}=0. \end{aligned}$$
We assume that \(\pi c/l\) and \(\alpha\) are independent random variables, which will be indicated with \(\Theta _1\) and \(\Theta _2\) respectively, and that they are uniformly distributed over \(\Lambda =[\kappa _i,\kappa _f]\times [\alpha _i,\alpha _f]\).
We choose Z as the (dimensionless) displacement \(u\left( \frac{l}{2},t\right) /A\) of the middle point of the string, so that
$$\begin{aligned} Z({\varvec{\theta }},t)=\cos \left( t\sqrt{\theta _1^2+\theta _2^2}\right) . \end{aligned}$$
Equation (26) becomes
$$\begin{aligned} \frac{\partial {\chi _K(z,{\varvec{\theta }},t)}}{\partial {t}}-\left( \sqrt{\theta _1^2+ \theta _2^2}\sin \left( t\sqrt{\theta _1^2+\theta _2^2}\right) \right) \frac{\partial {\chi _K(z,{\varvec{\theta }},t)}}{\partial {z}}=0, \end{aligned}$$
(39)
with the initial condition
$$\begin{aligned} \chi _K(z,{\varvec{\theta }},0)= \left\{ \begin{array}{ll} 1 &{} \text{ if } z\ge 1, \\ 0 &{} \text{ otherwise }. \end{array} \right. \end{aligned}$$
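The numerical integration of (39) can be sketched as below. This sketch assumes a first-order upwind scheme instead of the TVD scheme used in the paper (so its accuracy is lower than the error reported below); since \(t\sqrt{\theta _1^2+\theta _2^2}<\pi\) over the time window, the advection speed is negative and the scheme upwinds from the right. The partition and grid steps echo those reported below.

```python
import numpy as np

# Partition of Lambda = [ki,kf] x [0,af] into 40 x 20 rectangles.
ki, kf, af = 80.0, 100.0, 10.0
n1, n2 = 40, 20
c1 = ki + (np.arange(n1) + 0.5) * (kf - ki) / n1
c2 = (np.arange(n2) + 0.5) * af / n2
t1, t2 = np.meshgrid(c1, c2, indexing="ij")
r = np.hypot(t1, t2).ravel()          # sqrt(theta1^2 + theta2^2) per cell
N = r.size

dz, dt, T = 1.25e-2, 7.81e-5, 0.025
z = np.arange(-1.0, 1.0 + dz, dz)
chi = np.tile((z >= 1.0).astype(float), (N, 1))   # chi = 1 only where z >= Z(theta,0) = 1

steps = int(round(T / dt))
for n in range(steps):
    a = -r * np.sin(n * dt * r)       # speed dZ/dt, negative on this time window
    # upwind from the right for a < 0; inflow value chi = 1 at the right boundary
    chi[:, :-1] -= (a * dt / dz)[:, None] * (chi[:, 1:] - chi[:, :-1])
    chi[:, -1] = 1.0

F_num = chi.mean(axis=0)              # discretized version of (32)
F_exact = (np.cos(steps * dt * r)[:, None] <= z[None, :]).mean(axis=0)
print(np.max(np.abs(F_num - F_exact)))
```

The first-order scheme smears the jump carried by each \(\chi _K\); the TVD limiter is what keeps the paper's reported error small.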
The calculations were made for \(t=0.025\) s, \(t=0.05\) s, and \(t=0.075\) s; we limit ourselves to describing the first case because the other two are similar. Let us denote by \(\varsigma =\alpha _f(\kappa _f-\kappa _i)\) the area of \(\Lambda\), and by
$$\begin{aligned} z_1=\cos \left( t\sqrt{\kappa _f^2+\alpha _f^2}\right) , \quad z_2=\cos \left( t\kappa _f\right) , \quad z_3=\cos \left( t\sqrt{\kappa _i^2+\alpha _f^2}\right) \quad \text{ and } \quad z_4=\cos (t\kappa _i) \end{aligned}$$
the values of z for which the circle with center at the origin of the plane \((\theta _1, \theta _2)\) and radius \(\cos ^{-1}(z)/t\) passes through the corresponding vertices of \(\Lambda\) (Fig. 5). Then, with the help of (18), we obtain
(i) For \(z\le {z_1}\),
$$\begin{aligned} F(z,t)=0; \end{aligned}$$
(ii) For \(z_1\le {z}\le {z_2}\),
$$\begin{aligned} F(z,t)=\frac{1}{\varsigma }\left( \alpha _f(\kappa _f-\kappa _a)-\int _{\kappa _a}^{\kappa _f}\sqrt{\frac{(\cos ^{-1}(z))^2}{t^2}-\theta _1^2}\,d\theta _1\right) , \end{aligned}$$
where \(\kappa _a=\sqrt{\frac{(\cos ^{-1}(z))^2}{t^2}-\alpha _f^2}\);
(iii) For \(z_2\le {z}\le {z_3}\),
$$\begin{aligned} F(z,t)=\frac{1}{\varsigma }\left( \alpha _f(\kappa _f-\kappa _c)-\int _{\kappa _c}^{\kappa _b} \sqrt{\frac{(\cos ^{-1}(z))^2}{t^2}-\theta _1^2}\,d\theta _1\right) , \end{aligned}$$
where \(\kappa _b=\frac{1}{t}\cos ^{-1}(z)\) and \(\kappa _c\) has the same expression as \(\kappa _a\);
(iv) For \(z_3\le {z}\le {z_4}\),
$$\begin{aligned} F(z,t)=1-\frac{1}{\varsigma }\int _{\kappa _i}^{\kappa _b}\sqrt{\frac{(\cos ^{-1}(z))^2}{t^2}-\theta _1^2}\,d\theta _1; \end{aligned}$$
(v) For \(z\ge {z_4}\)
$$\begin{aligned} F(z,t)=1. \end{aligned}$$
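Cases (i)–(v) amount to subtracting from \(\Lambda\) the area of the disc of radius \(\cos ^{-1}(z)/t\), and the integrals have an elementary antiderivative. A sketch, using the parameter values given below and a Monte Carlo cross-check (the function names and test points are illustrative):

```python
import numpy as np

ki, kf, af = 80.0, 100.0, 10.0        # parameter values used in the paper
area = (kf - ki) * af                 # varsigma, the area of Lambda

def primitive(x, R):
    """Antiderivative of sqrt(R^2 - x^2)."""
    return 0.5 * (x * np.sqrt(max(R * R - x * x, 0.0))
                  + R * R * np.arcsin(min(max(x / R, -1.0), 1.0)))

def F(z, t):
    """CDF (i)-(v): fraction of Lambda outside the circle of radius arccos(z)/t."""
    R = np.arccos(min(max(z, -1.0), 1.0)) / t
    lo, hi = ki, min(kf, R)
    if hi <= lo:                      # circle misses Lambda entirely: case (v)
        return 1.0
    ka = np.sqrt(max(R * R - af * af, 0.0))   # abscissa where the circle meets theta2 = af
    a = min(max(ka, lo), hi)
    # columns fully inside the disc on [lo, a], circular cap on [a, hi]
    inside = af * (a - lo) + primitive(hi, R) - primitive(a, R)
    return 1.0 - inside / area

# Monte Carlo cross-check at t = 0.025 s
rng = np.random.default_rng(1)
t = 0.025
th1 = rng.uniform(ki, kf, 200_000)
th2 = rng.uniform(0.0, af, 200_000)
samples = np.cos(t * np.hypot(th1, th2))
for z in (-0.75, -0.6, -0.45):
    print(z, F(z, t), (samples <= z).mean())
```

The single expression reproduces cases (i)–(v) because the clipping of \(\kappa _a\) to \([\kappa _i,\kappa _f]\) selects the right geometric configuration automatically.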
The following parameter values have been used: \(\kappa _i=80\) s\(^{-1}\), \(\kappa _f=100\) s\(^{-1}\), \(\alpha _i=0\) and \(\alpha _f=10\) s\(^{-1}\), so that \(\varsigma =200\) s\(^{-2}\). The intervals \([\kappa _i, \kappa _f]\) and \([\alpha _i, \alpha _f]\) have been divided respectively into 40 and 20 equal sub-intervals, so that the partition of \(\Lambda\) is made up of \(N=800\) rectangles. Furthermore, \(\Delta z=1.25\cdot 10^{-2}\) and \(\Delta t=7.81\cdot 10^{-5}\) s were used. The relative error calculated with the formula (34) is of the order of \(4\cdot 10^{-2}\). Fig. 6 compares the numerical results with those obtained explicitly, at the three instants considered.

6 Conclusions

From the results obtained in the cases examined it appears that the accuracy of the cumulative distribution function, calculated with the proposed method, is good. The relative numerical procedure can be implemented in any code that allows the solution of the dynamic system and can be easily extended to vector-valued random processes.

Declarations

Conflict of interest

The authors declare that they have no conflict of interest.
Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://​creativecommons.​org/​licenses/​by/​4.​0/​.

Literature
1. Zio E (2013) The Monte Carlo simulation method for system reliability and risk analysis. Springer
2. Li J, Chen J (2009) Stochastic dynamics of structures. J. Wiley and Sons
3. Lucchesi M, Pintucchi B, Zani N (2022) MADY - a computer code for numerical modeling masonry structures. In preparation
4. Lucchesi M, Pintucchi B, Zani N (2019) The generalized density evolution equation for the dynamic analysis of slender masonry structures. Key Eng Mater 817:350–355
5. Lucchesi M, Pintucchi B, Zani N (2021) Effectiveness of the probability density evolution method for dynamic and reliability analyses of masonry structures. In: UNCECOMP 2021, 4th ECCOMAS thematic conference on uncertainty quantification in computational sciences and engineering. Streamed from Athens, Greece, pp 313–322
6. Cohn D (2010) Measure theory, 2nd edn. Birkhäuser
7. Gel'fand IM, Shilov GE (2016) Generalized functions, vol I. AMS Chelsea Pub
8. Lasota A, Mackey MC (1994) Chaos, fractals, and noise, 2nd edn. Springer
9. Lee JM (2003) Introduction to smooth manifolds. Springer
10. Evans LC, Gariepy RF (1996) Measure theory and fine properties of functions. CRC
11. Rudin W (1970) Real and complex analysis. McGraw-Hill
12. Zauderer E (1989) Partial differential equations of applied mathematics. Wiley
13. LeVeque RJ (1992) Numerical methods for conservation laws. Birkhäuser
14. Whitham GB (1999) Linear and nonlinear waves. Wiley
Metadata
DOI: https://doi.org/10.1007/s11012-022-01589-3
Print ISSN: 0025-6455 · Electronic ISSN: 1572-9648