Published in: Journal of Scientific Computing 3/2018

Open Access 15-11-2017

Radial Basis Function Methods for the Rosenau Equation and Other Higher Order PDEs

Authors: Ali Safdari-Vaighani, Elisabeth Larsson, Alfa Heryudono


Abstract

Meshfree methods based on radial basis function (RBF) approximation are of interest for numerical solution of partial differential equations because they are flexible with respect to the geometry of the computational domain, they can provide high order convergence, they are not more complicated for problems with many space dimensions and they allow for local refinement. The aim of this paper is to show that the solution of the Rosenau equation, as an example of an initial-boundary value problem with multiple boundary conditions, can be implemented using RBF approximation methods. We extend the fictitious point method and the resampling method to work in combination with an RBF collocation method. Both approaches are implemented in one and two space dimensions. The accuracy of the RBF fictitious point method is analyzed partly theoretically and partly numerically. The error estimates indicate that a high order of convergence can be achieved for the Rosenau equation. The numerical experiments show that both methods perform well. In the one-dimensional case, the accuracy of the RBF approaches is compared with that of the corresponding pseudospectral methods, showing similar or slightly better accuracy for the RBF methods. In the two-dimensional case, the Rosenau problem is solved both in a square domain and in an irregular domain with smooth boundary, to illustrate the capability of the RBF-based methods to handle irregular geometries.
Notes
Alfa Heryudono was supported by the European Commission CORDIS Marie Curie FP7 program Grant #235730 and National Science Foundation DMS Grant #1552238.

1 Introduction

The Rosenau equation has become an established research subject in mathematical physics since its introduction in the late 1980s by Philip Rosenau [31]. The equation is intended to overcome shortcomings of the already famous Korteweg–de Vries (KdV) equation [15] in describing phenomena of solitary wave interaction. Knowledge about this interaction, particularly when two or more wave packets called solitons collide with one another, is indispensable in digital transmission through optical fibers. As data carriers, we need solitons that interact “cleanly” in the sense that none of the solitons lose any information, shape, or other conserved quantities when they pass through each other. One may consult [7] for a fascinating history of this subject. The Rosenau equation in its general form is given by
$$\begin{aligned} u_t+\alpha (\mathbf {x},t)\varDelta ^2u_{t}=\nabla \cdot g(u),\quad (\mathbf {x},t)\in \varOmega \times (0,\,T], \end{aligned}$$
(1.1)
where \(\varOmega \) is a bounded domain in \(\mathbb {R}^d\) (\(d\le 3\)), the coefficient \(\alpha (\mathbf {x},t)\ge \alpha _0>0\) is bounded in the domain \(\varOmega \times [0,T]\), and the nonlinear function g(u) is a polynomial of degree \(q+1\), \(q\ge 1\). Multiple boundary conditions are required at the boundary \(\partial \varOmega \), such as
$$\begin{aligned} u(\mathbf {x},t)&=f_1(\mathbf {x},t),\quad (\mathbf {x},t)\in \partial \varOmega \times (0,\,T], \end{aligned}$$
(1.2)
$$\begin{aligned} \frac{\partial u}{\partial n}(\mathbf {x},t)&=f_2(\mathbf {x},t),\quad (\mathbf {x},t)\in \partial \varOmega \times (0,\,T], \end{aligned}$$
(1.3)
where n is the outward normal direction from the boundary, and we need an initial condition
$$\begin{aligned} u(\mathbf {x},0)=f_0(\mathbf {x}),\quad \mathbf {x}\in \varOmega . \end{aligned}$$
The well-posedness of the Rosenau equation has been studied theoretically by Park [27, 28]. Yet in practice, the equation is difficult to solve numerically due to the presence of nonlinearity, high-order spatial derivatives, multiple boundary conditions, and mixed time and space derivatives. Numerical studies based on Galerkin formulations can be found in [5, 6, 21, 24], and numerical studies based on finite difference methods are found in [1, 4, 19, 26].
The objective of this paper is to derive numerical methods based on radial basis function (RBF) collocation methods [14, 22] for the Rosenau equation that can be applied to problems in one, two, and three space dimensions, for non-trivial geometries. These methods will also be applicable to other higher order partial differential equations. We derive and implement a fictitious point RBF (FP–RBF) collocation method and a resampling RBF (RS–RBF) collocation method, and perform experiments in one and two space dimensions. We investigate the accuracy and behavior of the derived methods theoretically and numerically. We also compare the RBF methods with pseudospectral (PS) methods [12, 35] with respect to accuracy in one space dimension.
In this paper we are using global RBF approximation as a test case for implementation of multiple boundary conditions in general geometries. The current direction in the research on RBF approximation methods for PDEs is towards the use of localized RBF approximation methods. The main categories are stencil-based methods (RBF-FD) [2, 11] and partition of unity methods (RBF–PUM) [23, 32]. The FP–RBF technique should carry over to both cases, with minor differences in the implementation, whereas the RS–RBF method should be applicable to RBF–PUM, but not as easily to RBF-FD.
The outline of the paper is as follows: In Sect. 2, a basic RBF collocation scheme is introduced. Section 3 describes different approaches to handle the multiple boundary conditions. Then in Sect. 4 the theoretical approximation properties of the RBF method for the one-dimensional Rosenau problem are discussed, while the details of the analysis are given in Appendix A. The implementation aspects are discussed in Sect. 5, followed by numerical results in Sect. 6. Finally, Sect. 7 contains conclusions and discussion.

2 The Basic RBF Collocation Scheme

The approaches for handling multiple boundary conditions implemented in this paper are combined with an RBF collocation method. In this section, we introduce the general notation and quantities we need for RBF approximation of (time-dependent) PDEs. We start from given scalar function values \(u(x_j)\) at scattered distinct node locations \(x_j\in \mathbb {R}^d\), \(j=1,\ldots ,N\). We assume that the function is approximated by a standard RBF interpolant
$$\begin{aligned} s(x)=\sum ^{N}_{j=1}\lambda _j\phi (\Vert x-x_j\Vert ), \end{aligned}$$
(2.1)
where \(\Vert \cdot \Vert \) is the Euclidean norm and \(\phi \) is a real-valued function such as the inverse multiquadric \(\phi (r)=\frac{1}{\sqrt{\varepsilon ^2r^2+1}}\). The coefficients \(\lambda _j\in \mathbb {R}\) are determined by the interpolation conditions \(s(x_j)=u(x_j),\) \(j=1,\ldots ,N\). In matrix form, we have
$$\begin{aligned} A\bar{\lambda }=\bar{u}, \end{aligned}$$
(2.2)
where the matrix elements \(A_{ij}=\phi (\Vert x_i-x_j\Vert ),~i,j=1,\ldots ,N\), the vector \(\bar{\lambda }=[\lambda _1,\ldots ,\lambda _N]^T\), and \(\bar{u}=[u(x_1),\ldots , u(x_N)]^T\). When solving a PDE, we prefer to work with the discrete function values instead of the coefficients. Using (2.1) and (2.2) together, we see that the interpolant can be written
$$\begin{aligned} s(x)=\bar{\phi }(x)\bar{\lambda }=\bar{\phi }(x)A^{-1}\bar{u}, \end{aligned}$$
(2.3)
where \(\bar{\phi }(x)=[\phi (\Vert x-x_1\Vert ), \ldots , \phi (\Vert x-x_N\Vert )]\), assuming that A is non-singular. This holds for commonly used RBFs such as Gaussians, inverse multiquadrics and multiquadrics [25, 33] for distinct node points \(x_j\). Furthermore, we can use (2.3) to identify cardinal basis functions such that we can write the approximation in the FEM-like form
$$\begin{aligned} s(x)=\bar{\psi }(x)\bar{u}=\sum ^N_{j=1}\psi _j(x)u(x_j), \end{aligned}$$
(2.4)
where \(\bar{\phi }(x)A^{-1}=\bar{\psi }(x)=[\psi _1(x),\ldots ,\psi _N(x)]\). Because our final target is to solve PDEs, we need to look at how to apply a linear operator \(\mathscr {L}\) to the RBF approximation to compute \(s_{\mathscr {L}}=[\mathscr {L}s(x_1),\ldots , \mathscr {L}s(x_N)]^T\). In cardinal form, we get
$$\begin{aligned} \mathscr {L}{s}(x)=\mathscr {L}\bar{\psi }(x)\bar{u} =\sum ^N_{j=1}\mathscr {L}\psi _j(x)u(x_j). \end{aligned}$$
(2.5)
Using relation (2.3), the differentiation matrix \(\varPsi _\mathscr {L}=\{\mathscr {L}\psi _j(x_i)\}_{i,j=1,\ldots ,N}\) under operator \(\mathscr {L}\) is given by
$$\begin{aligned} \varPsi _\mathscr {L}=\varPhi _\mathscr {L}A^{-1}, \end{aligned}$$
(2.6)
where \(\varPhi _\mathscr {L}=\{\mathscr {L}\phi (\Vert x_i-x_j\Vert )\}_{i,j=1,\ldots ,N}\).
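As a concrete illustration of (2.6), the following sketch assembles the first-derivative differentiation matrix \(\varPsi _x=\varPhi _x A^{-1}\) for inverse multiquadric RBFs in one dimension. (The paper's own listings are in MATLAB; this is a Python/NumPy sketch, and the node count and shape parameter are illustrative choices only.)

```python
import numpy as np

def imq(r, eps):
    # inverse multiquadric: phi(r) = 1/sqrt(eps^2 r^2 + 1)
    return 1.0 / np.sqrt((eps * r) ** 2 + 1.0)

def imq_dx(dx, eps):
    # d/dx phi(|x - x_j|) in 1-D, where dx = x - x_j
    return -(eps ** 2) * dx / ((eps * dx) ** 2 + 1.0) ** 1.5

N, eps = 21, 3.0                                   # illustrative values
x = np.linspace(-1.0, 1.0, N)                      # node points
A = imq(np.abs(x[:, None] - x[None, :]), eps)      # A_ij = phi(|x_i - x_j|)
Phi_x = imq_dx(x[:, None] - x[None, :], eps)       # operator L = d/dx applied to the basis
Psi_x = np.linalg.solve(A.T, Phi_x.T).T            # Psi_L = Phi_L A^{-1}, via a linear solve

u = np.sin(np.pi * x)                              # nodal data
du = Psi_x @ u                                     # approximates pi*cos(pi*x) at the nodes
```

Applying `Psi_x` to a vector of nodal values differentiates the RBF interpolant at the nodes, which is exactly how the differentiation matrices are used in the time-stepping schemes below.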
For time-dependent PDE problems, we use the RBF approximation in space and then discretize the time interval. The solution \(u(x,t)\) is approximated by
$$\begin{aligned} s(x,t)=\sum ^N_{j=1}\psi _j(x)u_j(t), \end{aligned}$$
(2.7)
where \(u_j(t)\approx u(x_j,t)\) are the unknown functions to determine.

3 Dealing with Multiple Boundary Conditions

If we consider the one-dimensional version of equations (1.1)–(1.3) on a symmetric interval \(x\in [-L,\,L]\) we have
$$\begin{aligned} u_t+\alpha (x,t)u_{xxxxt}= g_u(u)u_x,\quad (x,t)\in [-L,\,L]\times (0,\,T], \end{aligned}$$
(3.1)
with boundary conditions
$$\begin{aligned} u(\pm L,t)&=f_1(\pm L,t),\quad t\in (0,\,T], \end{aligned}$$
(3.2)
$$\begin{aligned} u_x(\pm L,t)&=f_2(\pm L,t),\quad t\in (0,\,T]. \end{aligned}$$
(3.3)
Even for the one-dimensional case, how to implement multiple boundary conditions for a time-dependent global collocation problem is not obvious. In our case, we need to enforce two boundary conditions at each end point resulting in a total of four boundary conditions at the two boundary points. Collocating the PDE at all interior node points leads to a situation where we have more equations than node points. That is, unless we accept an overdetermined system, we need to either increase the number of degrees of freedom or discard some of the equations.
In fact, the subtleties of implementing multiple BCs are not tied to RBF methods. They have been actively researched in conjunction with other global collocation methods, particularly pseudospectral methods, since the 1970s. We list the following five popular methods:
1. Mixed hard and weak BCs [17]
2. Spectral penalty method [16]
3. Transforming to lower order system [34]
4. Fictitious/ghost point method [13]
5. Resampling method [9]
In this paper, we only consider methods (3)–(5) as we currently do not have a way to find penalty parameters for methods (1) and (2) that give numerically stable solutions.

3.1 Transforming to Lower Order System

A common approach when solving PDEs containing high order derivatives is to transform them into a system with lower order derivatives. By letting \(w = u_x\), the Rosenau equation (3.1) is transformed into
$$\begin{aligned} u_t + \alpha (x,t)w_{xxxt}&= wg_u(u)\\ w_t - u_{xt}&= 0 \end{aligned}$$
with boundary conditions \(u(\pm L,t)=f_1(\pm L,t)\) and \(w(\pm L,t)=f_2(\pm L,t)\). The advantage of this method is that the Neumann conditions for u at the boundaries become Dirichlet conditions for w. However, the system to solve becomes twice as large, as we need a total of 2N degrees of freedom for u and w. For this reason, and especially for global RBFs where differentiation matrices are dense, we do not pursue this method. However, for localized RBF methods, where differentiation matrices are sparse, this method may still be worth trying.

3.2 Fictitious Point Method

Fictitious or ghost point methods have been commonly used as a way to enforce multiple boundary conditions in finite difference methods. The implementation for global collocation methods such as pseudospectral methods is due to Fornberg [13].
Let \(-L=x_2<x_3<\cdots < x_{N-1}=L\) be distinct node points. The Dirichlet conditions (1.2) can be imposed by fixing the values for \(u(x_2)\) and \(u(x_{N-1})\), but for the Neumann conditions (1.3) we use the fictitious point approach proposed by Fornberg [13], and introduce two additional points at some arbitrary locations denoted by \(x_1\) and \(x_{N}\).
We introduce an RBF approximation s(xt) as in (2.7), extended to include the fictitious points, for the spatial approximation of the solution u(xt),
$$\begin{aligned} s(x,t)=\sum ^{N}_{j=1}\psi _j(x)u_j(t). \end{aligned}$$
(3.4)
Loosely following the fictitious point approach, we will modify this ansatz so that the boundary conditions are fulfilled. Conditions (1.2) are easily fulfilled by replacing \(u_j(t)\) with \(f_1(x_j,t)\) for \(j=2,N-1\). For the conditions (1.3), we need to formally solve a linear system. Define the vectors \(S_f=[u_1(t),\, u_{N}(t)]^T\) with values at the two fictitious points, and \(S_d=[u_3(t),\ldots , u_{N-2}(t)]^T\) containing the approximate solution values at points in the interior of the domain, then we have
$$\begin{aligned} \underbrace{ \left( \begin{array}{cc} \psi _1^\prime (x_2) &{} \psi _{N}^\prime (x_2)\\ \psi _1^\prime (x_{N-1}) &{} \psi _{N}^\prime (x_{N-1}) \end{array} \right) }_{B_f} S_f&+ \underbrace{ \left( \begin{array}{ccc} \psi _3^\prime (x_2) &{} \cdots &{} \psi _{N-2}^\prime (x_2)\\ \psi _3^\prime (x_{N-1}) &{} \cdots &{} \psi _{N-2}^\prime (x_{N-1}) \end{array} \right) }_{B_d} S_d \nonumber \\&+\underbrace{ \left( \begin{array}{cc} \psi _2^\prime (x_2) &{} \psi _{N-1}^\prime (x_2)\\ \psi _2^\prime (x_{N-1}) &{} \psi _{N-1}^\prime (x_{N-1})\\ \end{array} \right) }_{B_b} F_1(t) = F_2(t), \end{aligned}$$
(3.5)
where \(F_j(t)=[f_j(x_2,t),\,f_j(x_{N-1},t)]^T\). Inserting the boundary values \(F_1(t)\) and the expression we get for \(S_f\) by solving (3.5) into (3.4) leads to
$$\begin{aligned} s(x,t)&= \left( [\psi _3(x),\ldots ,\psi _{N-2}(x)]-[\psi _1(x),\, \psi _{N}(x)]\,B_f^{-1}B_d\right) \,S_d \nonumber \\&\quad +\,\left( [\psi _2(x),\, \psi _{N-1}(x)]-[\psi _1(x),\, \psi _{N}(x)]\,B_f^{-1}B_b \right) F_1(t)\nonumber \\&\quad +\,[\psi _1(x),\, \psi _{N}(x)]\,B_f^{-1}F_2(t). \end{aligned}$$
(3.6)
This expression is awkward to work with directly. We introduce the shorthand notation
$$\begin{aligned} s(x,t)= \sum ^{N-2}_{j=3}\tilde{\psi }_j(x)u_j(t)+F(x,t), \end{aligned}$$
(3.7)
where \(\tilde{\psi }_j(x)\) and F(xt) can be directly identified from (3.6). In this simple two point boundary case, we can actually derive the explicit form of the modified basis for illustration. This yields
$$\begin{aligned} \tilde{\psi }_j(x)&=\psi _j(x)- \frac{\psi _{N}^\prime (x_{N-1}) \psi _{j}^\prime (x_2)- \psi _{N}^\prime (x_2) \psi _{j}^\prime (x_{N-1}) }{ \psi _{N}^\prime (x_{N-1})\psi _1^\prime (x_2)- \psi _{N}^\prime (x_2) \psi _1^\prime (x_{N-1}) }\psi _1(x)\nonumber \\&\quad + \frac{ \psi _{1}^\prime (x_{N-1}) \psi _{j}^\prime (x_2)- \psi _{1}^\prime (x_2) \psi _{j}^\prime (x_{N-1}) }{ \psi _{N}^\prime (x_{N-1}) \psi _1^\prime (x_2) - \psi _{N}^\prime (x_2) \psi _1^\prime (x_{N-1}) } \psi _{N}(x). \end{aligned}$$
(3.8)
In order to use the RBF approximation (3.7) for a PDE problem, we need to compute the effect of applying a spatial differential operator \(\mathscr {L}\) at the interior node points. That is, we need a method to evaluate \(\mathscr {L}\tilde{\psi }_j(x_i)\), \(i,j=3,\ldots ,N-2\), and \(\mathscr {L}F(x_i,t)\), \(i=3,\ldots ,N-2\). This is done in two steps. First, we use (2.6) to compute \(\varPsi _\mathscr {L}\) for interior node points \(x_i\), \(i=3,\ldots ,N-2\). Note however that we include all basis functions \(\psi _j(x)\), \(j=1,\ldots ,N\). Then we extract the columns pertaining to the fictitious points into \(\varPsi _{\mathscr {L},f}\), the columns pertaining to the boundary points into \(\varPsi _{\mathscr {L},b}\), and the remaining columns into \(\varPsi _{\mathscr {L},d}\). Then the modified differentiation matrix and the contribution in the forcing function can be computed as
$$\begin{aligned} \tilde{\varPsi }_{\mathscr {L}}= & {} \varPsi _{\mathscr {L},d}-\varPsi _{\mathscr {L},f}B_f^{-1}B_d, \end{aligned}$$
(3.9)
$$\begin{aligned} {}[F_{\mathscr {L}}(x_3,t),\ldots ,F_{\mathscr {L}}(x_{N-2},t)]^T= & {} (\varPsi _{\mathscr {L},b}-\varPsi _{\mathscr {L},f}B_f^{-1}B_b)F_1(t)+\varPsi _{\mathscr {L},f}B_f^{-1}F_2(t). \nonumber \\ \end{aligned}$$
(3.10)
Note that from (2.6) and the definitions above, if no operator is applied, we have \(\varPsi _{I,d}=I\) and \(\varPsi _{I,f}=\varPsi _{I,b}=0\) leading to \(F(x_i,t)=F_t(x_i,t)=0\). We use this to simplify all subsequent expressions where the RBF approximation (3.7) or its time derivative are evaluated at the node points.
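The elimination of the fictitious values can be made concrete with a short sketch (Python/NumPy here; the paper's implementation is in MATLAB, and the node layout and parameters below are illustrative). It assembles \(B_f\), \(B_d\), and \(B_b\) from the cardinal first-derivative matrix and solves (3.5) for the fictitious values; for homogeneous boundary data the resulting approximant then satisfies the Neumann conditions at \(\pm L\) by construction:

```python
import numpy as np

def imq(r, eps): return 1.0 / np.sqrt((eps * r) ** 2 + 1.0)
def imq_dx(dx, eps): return -(eps ** 2) * dx / ((eps * dx) ** 2 + 1.0) ** 1.5

L, eps = 1.0, 2.0                                  # illustrative values
xb = np.linspace(-L, L, 11)                        # boundary + interior nodes x_2,...,x_{N-1}
h = xb[1] - xb[0]
x = np.concatenate(([-L - h], xb, [L + h]))        # add fictitious points x_1 and x_N
N = len(x)

A = imq(np.abs(x[:, None] - x[None, :]), eps)
Psi_x = np.linalg.solve(A.T, imq_dx(x[:, None] - x[None, :], eps).T).T

fict, bnd = [0, N - 1], [1, N - 2]
intr = list(range(2, N - 2))
Bf = Psi_x[np.ix_(bnd, fict)]                      # psi'_{1,N} at the boundary points
Bd = Psi_x[np.ix_(bnd, intr)]                      # psi'_j, interior j
Bb = Psi_x[np.ix_(bnd, bnd)]                       # psi'_{2,N-1}

# homogeneous data F1 = F2 = 0: solve (3.5) for the fictitious values
Sd = np.sin(np.pi * x[intr])                       # arbitrary interior data
Sf = np.linalg.solve(Bf, -Bd @ Sd)

u = np.zeros(N)
u[intr], u[fict] = Sd, Sf
neumann_residual = Psi_x[bnd] @ u                  # s'(-L), s'(L); zero by construction
```

The same matrices, combined with fourth-derivative analogues, give the modified differentiation matrix of (3.9).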
Collocating the modified RBF approximation (3.7) with the PDE (1.1) at the node points leads to the system of ODEs
$$\begin{aligned} u_i^\prime (t)&+\sum _{j=3}^{N-2}\alpha (x_i,t)\frac{d^4 \tilde{\psi }_j}{d x^4}(x_i)u_j^\prime (t)=\sum _{j=3}^{N-2}g_u(u_i(t))\frac{d \tilde{\psi }_j}{d x}(x_i)u_j(t) \nonumber \\&-\alpha (x_i,t) F_{xxxxt}(x_i,t)+g_u(u_i(t))F_x(x_i,t), \quad i=3,\ldots ,N-2. \end{aligned}$$
(3.11)
In matrix form, we get the method of lines formulation
$$\begin{aligned} \underbrace{(I+A_\alpha (t)\tilde{\varPsi }_{xxxx})}_{Q(t)}S_d^\prime =\underbrace{G_u(S_d)\tilde{\varPsi }_x}_{D(S_d)}S_d+\underbrace{G_u(S_d)F_x(t) - A_\alpha (t)F_{xxxx}^\prime (t)}_{F(S_d,t)}, \end{aligned}$$
(3.12)
where the diagonal coefficient matrices are
$$\begin{aligned} A_\alpha (t)= & {} \mathrm {diag}(\alpha (x_3,t),\ldots ,\alpha (x_{N-2},t)),\\ G_u(S_d)= & {} \mathrm {diag}(g_u(u_3(t)),\ldots ,g_u(u_{N-2}(t))), \end{aligned}$$
and the vectors on the right-hand side are defined as \(F_\mathscr {L}(t)=[F_\mathscr {L}(x_3,t),\ldots ,F_\mathscr {L}(x_{N-2},t)]^T\). The problem (3.12) can be solved by employing a solution method for nonlinear ODE systems.
The mass matrix Q(t) is in general invertible but non-singularity cannot be guaranteed. Kansa [20] argued that if the centers of the RBFs are distinct and the PDE problem is well-posed, matrices discretizing spatial operators are generally found to be non-singular. Hon and Schaback [18] showed that occurrences of singular matrices are very rare, but do exist. When \(\alpha (x,t)\) is constant, the mass matrix is constant over time. Then we can LU-factorize the matrix once and use this factorization throughout the time stepping algorithm.
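The reuse of the factorization can be sketched as follows (Python/SciPy here; in MATLAB one would call lu once before the time loop). The random matrix is merely a stand-in for a constant, well-conditioned mass matrix Q:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
N = 50
Q = np.eye(N) + 0.1 * rng.standard_normal((N, N))  # stand-in for a constant mass matrix

lu, piv = lu_factor(Q)                 # factor once, before the time stepping starts
for step in range(5):                  # inside the time loop: only triangular solves
    rhs = rng.standard_normal(N)       # stand-in for the current right-hand side
    dS = lu_solve((lu, piv), rhs)      # O(N^2) work per step instead of O(N^3)
```

Each step then costs two triangular solves rather than a full dense solve, which is the point of factorizing once when \(\alpha \) is constant.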
An alternative to the derivation above is to use the original cardinal basis functions, and include the boundary condition equations in the final system. Define rectangular identity matrices \(I_k\) such that \(I_k(S_d,\,S_b,\,S_f)^T=S_k\), for \(k=d,b,f\). Also, we order the columns in the differentiation matrices in accordance with the order of the unknowns such that \(\varPsi _\mathscr {L}=[\varPsi _{\mathscr {L},d},\,\varPsi _{\mathscr {L},b},\varPsi _{\mathscr {L},f}]\). Then we can express (3.12) as
$$\begin{aligned} \left( \begin{array}{c} I_d+A_{\alpha }\varPsi _{xxxx} \\ 0\\ 0 \end{array}\right) \left( \begin{array}{c} S_d^\prime \\ S_b^\prime \\ S_f^\prime \end{array}\right) = \left( \begin{array}{c} G_u(S_d)\varPsi _x\\ I_b\\ B_d\ \ B_b\ \ B_f \end{array}\right) \left( \begin{array}{c} S_d\\ S_b\\ S_f \end{array}\right) - \left( \begin{array}{c} 0\\ F_1(t)\\ F_2(t) \end{array}\right) . \end{aligned}$$
(3.13)
In this case, the mass matrix is singular, and a differential algebraic solver is required to compute the solution of the resulting system of differential algebraic equations (DAE).

3.3 Resampling Method

In the resampling method, we do not add any points as in the fictitious point method of the previous section. The four boundary conditions are still enforced at the boundary points as algebraic equations, but the PDE is instead collocated at \(N-4\) auxiliary interior points. Let the solution \(u(x,t)\) be approximated in Lagrange form by
$$\begin{aligned} s(x,t)=\sum ^{N}_{j=1}\psi _j(x)u_j(t), \end{aligned}$$
(3.14)
where \(x_1\) and \(x_N\) are boundary points and \(x_2,\ldots ,x_{N-1}\) are interior points. The four algebraic equations arising from the boundary conditions are
$$\begin{aligned} u_1(t) = f_1(x_1,t),&\quad u_N(t) = f_1(x_N,t), \end{aligned}$$
(3.15)
$$\begin{aligned} \sum _{j=1}^{N}\frac{d\psi _j}{dx}({x}_1)u_j(t) = f_2(x_1,t),&\quad \sum _{j=1}^{N}\frac{d\psi _j}{dx}({x}_N)u_j(t) = f_2(x_N,t). \end{aligned}$$
(3.16)
To write the equations in matrix form, we again divide the unknown functions into parts: \(S_d\) for interior node points and \(S_b\) for boundary node points. Then the boundary conditions can be expressed as
$$\begin{aligned} \left( \begin{array}{cc} 0 &{} I_b\\ \tilde{B}_d &{} \tilde{B}_b \end{array}\right) \left( \begin{array}{c} S_d\\ S_b\\ \end{array}\right) = \left( \begin{array}{c} F_1(t)\\ F_2(t) \end{array}\right) , \end{aligned}$$
(3.17)
where the boundary condition matrices \(\tilde{B}_d\) and \(\tilde{B}_b\) are analogous to those in the previous section, but with slightly different basis functions as there are no fictitious points. We have N unknown functions, and we have used four equations for the boundary conditions. This means that we need \(N-4\) additional equations. The idea in the resampling method is to collocate the PDE at \(N-4\) auxiliary interior points, instead of collocating at the node points. We define the auxiliary points \(\tilde{x}_i\), \(i=1,\ldots ,N-4\) and collocate the PDE to get the equations
$$\begin{aligned} \sum _{j=1}^{N}\left[ \psi _j(\tilde{x}_i)+\alpha (\tilde{x}_i,t)\frac{d^4 \psi _j}{d x^4}(\tilde{x}_i)\right] u_j^\prime (t)=\sum _{j=1}^{N}\left[ g_u\left( \sum _{k=1}^{N}\psi _k(\tilde{x}_i)u_k(t)\right) \frac{d \psi _j}{d x}(\tilde{x}_i)\right] u_j(t),\quad i=1,\ldots ,N-4. \end{aligned}$$
(3.18)
Define the resampling matrices \(\varPsi _\mathscr {L}^R=\{\mathscr {L}\psi _j(\tilde{x}_i)\}_{i=1,\ldots ,N-4,\, j=2,\ldots ,N-1,1,N}\) (columns ordered according to the unknowns). The resampled equations (3.18), together with the algebraic equations (3.17), yield an \(N \times N\) DAE of index 1,
$$\begin{aligned} \left( \begin{array}{c} \varPsi ^R+\tilde{A}_{\alpha }\varPsi ^R_{xxxx}\\ 0\\ 0 \end{array}\right) \left( \begin{array}{c} S_d^\prime \\ S_b^\prime \end{array}\right) = \left( \begin{array}{c} G_u(\tilde{S})\varPsi ^R_x\\ 0\ \ I_b\\ \tilde{B}_d\ \ \tilde{B}_b \end{array}\right) \left( \begin{array}{c} S_d\\ S_b \end{array}\right) - \left( \begin{array}{c} 0\\ F_1(t)\\ F_2(t) \end{array}\right) , \end{aligned}$$
(3.19)
where \(\tilde{A}_{\alpha }=\mathrm {diag}(\alpha (\tilde{x}_1,t),\ldots ,\alpha (\tilde{x}_{N-4},t) )\) and \(\tilde{S}=\varPsi ^R\left( \begin{array}{c} S_d\\ S_b\\ \end{array}\right) \).
The system of equations (3.19) can be solved using a differential algebraic solver such as DASPK [3, 29]. An example of how this can be implemented in MATLAB is given in Sect. 5.

3.4 Generalization to More Space Dimensions

The main difference when moving to more than one space dimension is that, instead of just two boundary points, we have a boundary curve or surface that is discretized in the same way as the interior of the domain. The formulation of the two methods is in all essential parts the same, and the formulations (3.13) and (3.19) are valid in the same form, but where we previously had two boundary points and two fictitious points, we now have \(N_b\) boundary points and \(N_b\) fictitious points. Similarly, for the resampling method, we have \(2N_b\) boundary conditions, and therefore we collocate the PDE at \(N-2N_b\) auxiliary points. Experiments for problems in two space dimensions are presented in Sect. 6.

4 Error Estimates

We have derived semi-discrete error estimates for the one-dimensional Rosenau problem and the FP–RBF approach. The details of the analysis are provided in Appendix A. Exponential convergence estimates for the spatial part of the error are based on theory for global RBF interpolants [30], and expressed in terms of the fill distance
$$\begin{aligned} h=\sup _{x\in \varOmega } \min _{x_j\in X}\Vert x-x_j\Vert . \end{aligned}$$
(4.1)
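For scattered nodes, the fill distance is conveniently estimated by measuring the distance from each point of a dense evaluation grid to its nearest node. A small Python sketch (the uniform node set is an illustrative choice; for equispaced nodes the fill distance is half the node spacing):

```python
import numpy as np

X = np.linspace(-1.0, 1.0, 11)          # node set, spacing 0.2
Y = np.linspace(-1.0, 1.0, 2001)        # dense grid as a proxy for Omega

# h = sup_{y in Omega} min_j |y - x_j|, approximated over the dense grid
h = np.max(np.min(np.abs(Y[:, None] - X[None, :]), axis=1))   # here 0.1
```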
The estimates for the error growth in time are more problematic. The global error estimate is given by
$$\begin{aligned} \Vert E(t)\Vert _{\infty }\le C(t) e^{-\frac{\gamma }{\sqrt{h}}}e^{C_3(t)} \max _{0\le \tau \le t}\Vert u(\tau )\Vert _{\mathscr {N}(\varOmega )}, \end{aligned}$$
(4.2)
where the norm \(\Vert \cdot \Vert _{\mathscr {N}(\varOmega )}\) is the so-called native space norm connected with the chosen type of RBF. This estimate would be fine if the function \(C_3(t)\) were small. This part of the estimate is connected with the non-linear term and has the form
$$\begin{aligned} C_3(t) =\tilde{C}_q\Vert Q^{-1}\Vert \max (\Vert \tilde{\varPsi }_x\Vert ,\Vert B_x\Vert )r(t), \end{aligned}$$
(4.3)
where the function r(t) represents a polynomial growth factor in time. The function \(C_3(t)\) becomes large due to the matrix norms. When estimated separately like this, the product \(\Vert Q^{-1}\Vert \Vert \tilde{\varPsi }_x\Vert \) becomes large, growing as \(\mathscr {O}(h^{-1})\) (cf. Sect. A.5). The way we have derived the estimates does not easily allow for an estimate that takes the norm of the product \(\Vert Q^{-1}\tilde{\varPsi }_x\Vert \) instead. However, this is in principle the way the matrices appear in the critical terms, and if the product norm is investigated numerically, it turns out to be small.
Because of this overestimate of the time growth, the error estimates are not quantitatively useful, but they provide a qualitative insight into how the different errors interact, and illustrate the difficulties of bounding the non-linear terms in an effective way.

5 MATLAB Implementation

In this section, sample MATLAB implementations of the fictitious point and resampling RBF methods for the one-dimensional Rosenau equation (3.1)–(3.3) are presented and discussed. We use an example with a known solution. For \(g(u)=10u^3-12u^5-\frac{3}{2}u\) and \(\alpha (x,t)=0.5\), it holds that \(u(x,t)=\text {sech}(x-t)\) is a solution [28]. For both methods, equally spaced nodes are used, and the spatial domain is \([-L,L]\).
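That this pair indeed solves (3.1) can be checked symbolically. The following SymPy snippet substitutes the solution into the residual of (3.1), using that \(g_u(u)u_x=(g(u))_x\) (the snippet itself is an illustration added here, not part of the paper's MATLAB code):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
u = sp.sech(x - t)
g = 10*u**3 - 12*u**5 - sp.Rational(3, 2)*u

# residual of u_t + alpha*u_xxxxt - g_u(u)*u_x with alpha = 1/2,
# rewriting g_u(u)*u_x as d/dx g(u)
res = sp.diff(u, t) + sp.Rational(1, 2)*sp.diff(u, x, 4, t) - sp.diff(g, x)
res_at_point = float(res.subs({x: 0.3, t: 0.1}).evalf())   # vanishes up to roundoff
```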

5.1 Implementation of the Fictitious Point Method

Following the idea of the fictitious point method in Sect. 3.2, we complement the interior and boundary RBF nodes with two (the number of boundary nodes) fictitious points outside the computational domain; see Fig. 1 for an illustration.
We generate the differentiation matrices using the modified basis functions according to Equation (3.9). Collocating the Rosenau equation by applying the modified differentiation matrices leads to ODE system (3.12), which we here solve by using the built-in MATLAB ODE solver ode15s. The two functions below constitute a complete MATLAB implementation of the problem.
It can be noted that when we use the modified basis functions, we need to provide the time derivatives of the boundary conditions as well as the boundary conditions themselves. This is not needed with the alternative formulation (3.13), but instead the system is stated in DAE form.

5.2 Implementation of the Resampling RBF Method

For the resampling method, we start directly from the DAE form derived in Sect. 3.3, where the four boundary conditions enforced at the boundary points constitute the algebraic part. The \(N-4\) auxiliary points where the PDE is enforced are uniformly distributed in the interior of the computational domain and do not in general coincide with the RBF node points where the solution is sought. We organize the solution vector as \(S=[S_d ~S_b]^T\), where, as before, \(S_d\) contains solution values at the interior RBF nodes and \(S_b\) contains the two boundary values. Then the DAE discretization scheme can schematically be displayed as
$$\begin{aligned} \left( \begin{array}{c} \varPsi ^R+\tilde{A}_{\alpha }\varPsi ^R_{xxxx}\\ 0\\ 0 \end{array}\right) S^\prime = \left( \begin{array}{c} G_u(\tilde{S})\varPsi ^R_{x}\\ 0\ \ I_b\\ \tilde{B} \end{array}\right) S - \left( \begin{array}{c} 0\\ F_1(t)\\ F_2(t) \end{array}\right) , \end{aligned}$$
where \(\varPsi ^R\) is an \((N-4) \times N\) resampling matrix that provides values at the auxiliary points given values at the node points, \(\tilde{S}=\varPsi ^RS\), \(G_u(\tilde{S})\) is an \((N-4) \times (N-4)\) diagonal matrix, and \(\varPsi ^R_{x}\) and \(\varPsi ^R_{xxxx}\) are \((N-4) \times N\) resampled first and fourth derivative matrices, respectively. The derivative boundary condition matrix \(\tilde{B}=(\tilde{B}_d~~\tilde{B}_b)\) is a \(2\times N\) matrix, see Eq. (3.17) for details.
Both ODEs and DAEs of index 1 can be solved in MATLAB using ode15s, previously employed for the fictitious point method. One may also use the syntactically similar open source software Octave, with dasspk as the DAE solver. The following two functions show the MATLAB implementation of the resampling RBF method.

6 Numerical Results

In this section, we perform numerical experiments to investigate the accuracy and convergence of the proposed schemes. Both one-dimensional and two-dimensional test cases are considered. In all tests, the inverse multiquadric RBF is used. The shape parameter values are chosen using a parametric relation such that they fall within the region where the method is stable, while making the solution as accurate as possible. For the one-dimensional test case, we compare the results with those of a pseudo-spectral resampling (RS–PS) and a pseudo-spectral fictitious point (FP–PS) method. We have not included the code here, but it can be downloaded from the authors’ web pages.

6.1 The One-Dimensional Case

We consider the same test problem as in the sample implementations in Sect. 5, with \(g(u)=10u^3-12u^5-\frac{3}{2}u\) and known exact solution \(u(x,t)=\text {sech}(x-t)\). Equispaced node distributions over the interval \([-L,\,L]\) are used for the RBF methods. The total number of points N includes the fictitious points in the FP–RBF case. The initial and boundary conditions are taken from the exact solution.
The exact solution over the interval \([-L,\,L]\) for \(L=1\) and \(L=10\) is shown at different times t together with the numerical solution using the FP–RBF method in Fig. 2. The solution is a pulse that travels to the right with time.
In Fig. 2, a shape parameter \(\varepsilon =\frac{0.08}{h}\) is used. This relation is experimentally determined to ensure stable computations and high solution accuracy. Figure 3 shows how the errors of the two RBF methods vary with \(\varepsilon \). Using the formula leads to \(\varepsilon =1.16\) and \(\varepsilon =0.4\) for the two cases, which is within the best region for each method. It can be noted that the good range of \(\varepsilon \) is narrower for the resampling method and that both methods rapidly become ill-conditioned when \(\varepsilon \) is too small.
To illustrate the capability of the proposed methods, we begin by comparing the approximation accuracy with that of the corresponding pseudo-spectral methods. The pseudo-spectral methods employ the same number of Chebyshev nodes as the number of uniform nodes used by the RBF methods. For a description of the implementations, see [9, 13]. The absolute errors for two different values of L are plotted in Fig. 4. As shown in the figure, the errors of both the RBF based methods and the pseudo-spectral methods are similar in magnitude. For the shorter interval, the RS–PS method has smaller errors near the boundaries, which is consistent with the clustering of the Chebyshev nodes. However, for the larger interval, where the solution is small at the boundary, this effect is not visible.
The \(L_{\infty }\) errors over time for the approximated solutions are illustrated in Fig. 5. If we consider the global error estimate (A.41), and insert \(q=4\) (for this test case), the exponential time-dependent growth factor becomes \(\exp (C_3((1+t)^{1.12}-1)/1.12)\). We do not know the precise value of \(C_3\), but based on our experiments a value larger than one should be expected, in which case the predicted growth would be at least two orders of magnitude larger than what is observed. However, as discussed in Sect. 4, this is expected to be an overestimate of the true error growth. Both the accuracy and the growth rate of the errors of the four different solutions are similar. For the shorter interval \(L=1\), the RS–RBF method is slightly worse than the other three methods, while for the longer interval \(L=10\), both RBF methods are slightly more accurate than the pseudo-spectral methods for longer times.
Figure 6 displays the convergence behavior as a function of N for the two RBF methods compared with the PS methods. For all four methods, the highest attainable accuracy is almost the same. When \(\varepsilon h\) is constant, as in this experiment, we would expect the error to reach a saturation level, but accuracy is also limited by conditioning, and this may be the effect that we see here. In both cases, the FP–RBF method reaches the highest accuracy faster than the RS–RBF method. The PS methods perform best for the short interval, but worse than the RBF methods for the longer interval. One explanation can be that the node density of the Chebyshev nodes, compared with the uniform nodes, is lower in the interesting region (the middle of the domain) in this case.

6.2 Two-Dimensional Square Domain

In this section, we demonstrate how the flexibility of the RBF approximations allows us to implement the FP–RBF and RS–RBF methods in a two-dimensional domain with essentially the same effort as for the one-dimensional problem. Here, we do not compare with the PS method, which is less straightforward to implement. We consider the square domain \(\varOmega =[-L,\, L]\times [-L,\, L]\) and the Rosenau equation (1.1) with \(\alpha =1\), \(g(u)=u^3+u^2\), initial condition \(f_0(x,y)=\text {sech} (x+y)\), and boundary conditions
$$\begin{aligned} f_1(x,y,t)&=\text {sech}(x+y-t), \end{aligned}$$
(6.1)
$$\begin{aligned} f_2(x,y,t)&=-\text {sech}(x+y-t)\text {tanh}(x+y-t), \end{aligned}$$
(6.2)
where the derivative in the second condition is taken as either \(u_x\) or \(u_y\) depending on which part of the boundary is involved. For the two-dimensional test cases, we do not have any analytical solutions. The approximate solution at two different times is displayed in Fig. 7.
We start from a uniform discretization of the domain \(\varOmega \) with \(n^2\) points. We denote the number of interior points by \(N_d\) and the number of boundary points by \(N_b\). For FP–RBF we need to add \(N_b\) fictitious points outside the domain to enforce the Neumann boundary conditions. Note that if we simply choose an extension of the uniform grid, the resulting number of fictitious points is too large. The total number of points becomes \(N=N_d+2N_b=(n-2)^2+2(4n-4)=(n+2)^2-8\). For the RS–RBF method we generate \(N-2N_b\) auxiliary points inside the domain. Note that here the number of boundary points is modified (and these are therefore not on the uniform grid) to make the total and auxiliary node numbers compatible. If we choose \(N_b=4(n-2)-4\), then the number of auxiliary points is \((n-4)^2\). The total number of node points is \(N=N_d+N_b=(n-2)^2+4n-12=n^2-8\). Sample node distributions for the two methods are displayed in Fig. 8.
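The point-count bookkeeping above is easy to check programmatically; the following short Python sketch (function names are ours, for illustration) verifies the closed-form expressions:

```python
# Node counts for the square domain, following the formulas in the text;
# n is the number of points per side of the initial uniform grid.
def fp_counts(n):
    Nd = (n - 2) ** 2       # interior points
    Nb = 4 * n - 4          # boundary points of the uniform grid
    return Nd + 2 * Nb      # total: interior + boundary + Nb fictitious points

def rs_counts(n):
    Nd = (n - 2) ** 2       # interior points
    Nb = 4 * (n - 2) - 4    # modified number of boundary points
    N = Nd + Nb             # total number of node points
    return N, N - 2 * Nb    # total and number of auxiliary points

for n in range(5, 30):
    assert fp_counts(n) == (n + 2) ** 2 - 8
    N, aux = rs_counts(n)
    assert N == n ** 2 - 8 and aux == (n - 4) ** 2
```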
According to the error estimate (A.41) for the FP–RBF method, we expect exponential convergence in \(1/\sqrt{h}\) for fixed \(\varepsilon ^{}\). In practice, we often observe exponential convergence in 1 / h. In Fig. 9, to make a fair comparison, we plot the error as a function of \(\sqrt{N}\propto 1/h\). For this range of N-values, the conditioning is low enough to not influence the error, and exponential convergence can be observed for both RBF methods. We see that the estimated slopes in the right subfigure are precisely double those in the left subfigures. If we take into account that h is also twice as large for the case \(L=2\), we can conclude that the rate of convergence in terms of 1 / h is the same in both cases.

6.3 Two-Dimensional Irregular Domain with Smooth Boundary

We now take a step further in demonstrating the flexibility of the RBF based methods by considering an irregular two-dimensional domain. As a test problem, we consider the domain with boundary defined by the parametric equation
$$\begin{aligned} r(\theta )=1+0.06(\sin (6\theta )+\sin (3\theta )),\quad \theta \in [0,2\pi ). \end{aligned}$$
(6.3)
We also need the derivative of the boundary equation in order to compute the outward normal direction \(n=(n_x,n_y)\), which is needed for the boundary conditions. We have
$$\begin{aligned} (n_x,n_y) = \frac{r^\prime (\theta )(\sin (\theta ),-\cos (\theta )) + r(\theta )(\cos (\theta ),\sin (\theta ))}{\sqrt{r^\prime (\theta )^2+r(\theta )^2}}. \end{aligned}$$
(6.4)
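Since (6.4) is easy to mistype, it can be validated numerically: the computed vector should have unit length and be orthogonal to the boundary tangent. A minimal sketch for the boundary (6.3), written for verification rather than as part of the solver:

```python
import math

def r(t):   # boundary radius (6.3)
    return 1 + 0.06 * (math.sin(6 * t) + math.sin(3 * t))

def rp(t):  # derivative r'(theta)
    return 0.06 * (6 * math.cos(6 * t) + 3 * math.cos(3 * t))

def normal(t):
    # outward normal (6.4)
    s = math.hypot(rp(t), r(t))
    nx = (rp(t) * math.sin(t) + r(t) * math.cos(t)) / s
    ny = (-rp(t) * math.cos(t) + r(t) * math.sin(t)) / s
    return nx, ny

def tangent(t):
    # tangent of the boundary curve p(theta) = r(theta)(cos(theta), sin(theta))
    tx = rp(t) * math.cos(t) - r(t) * math.sin(t)
    ty = rp(t) * math.sin(t) + r(t) * math.cos(t)
    return tx, ty

for t in (0.0, 0.7, 2.1, 4.4):
    nx, ny = normal(t)
    tx, ty = tangent(t)
    assert abs(math.hypot(nx, ny) - 1) < 1e-12   # unit length
    assert abs(nx * tx + ny * ty) < 1e-12        # orthogonal to the tangent
```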
We use a similar test problem as for the square domain with boundary Dirichlet data
$$\begin{aligned} f_1(x,y,t)=\mathrm {sech}(x+y-t). \end{aligned}$$
(6.5)
For the normal derivative condition, we impose
$$\begin{aligned} f_2(x,y,t)=\nabla u\cdot n=-\text {sech}(x+y-t)\text {tanh}(x+y-t)(n_x+n_y). \end{aligned}$$
(6.6)
The approximate solution at three different times is shown in Fig. 10.
Sample node distributions for the FP–RBF and RS–RBF methods are illustrated in Fig. 11. Just as for the square domain, N is the total number of points, where the number of fictitious points outside the domain is \(N_b\), i.e., \(N=N_d+2N_b\), and the number of auxiliary points inside the domain in the resampling method is \(N_d-2N_b\).
The max error as a function of \(\sqrt{N}\) for a fixed shape parameter value is illustrated in Fig. 12. The reference solution is computed using the FP–RBF method with \(N_d=547\) and \(N_b=84\) nodes. The max error is estimated from evaluation at 540 radially uniformly distributed points in the domain. The RS–RBF method is less accurate in this case, even if it converges at a similar rate. Other experiments in this section have also shown that it is more sensitive to the choice of method parameters.
Overall, the error trends are similar to those for the square domain, showing that the RBF methods provide a well functioning generalization of both the fictitious point method and the resampling method to general domains.

7 Conclusion

The Rosenau equation, which is used as an application throughout this paper, is an example of a non-linear PDE with multiple boundary conditions as well as mixed space-time derivatives. Multiple boundary conditions provide an extra challenge when solving PDE problems. The standard form of a typical collocation method assumes that one condition is imposed at each node point/grid point. Hence, the additional conditions at the boundary nodes lead to a mismatch between the number of conditions and the number of unknowns.
Two approaches to manage multiple boundary conditions that have been introduced for spectral methods are fictitious point methods and resampling methods. In this paper we have shown how to implement these approaches in the context of RBF collocation methods. From numerical experiments for a one-dimensional test problem, we could see that the behavior of the method with respect to accuracy in space and time is very similar to that of the corresponding pseudo-spectral method.
For two-dimensional problems, already in a regular geometry such as the square, the application of spectral methods becomes more complicated. Approximations are typically based on tensor product grids, but if we use the one-dimensional extension techniques for each grid line, the numbers of extra conditions and extra points again do not naturally match. The problem can, for example, be resolved by choosing one of the directions for the corner points, but then the approximations in the other direction need to be of lower order.
We show that with the two RBF methods, due to the freedom of node placement, we can distribute the fictitious points or the resampled nodes uniformly and symmetrically with respect to the domain. Furthermore, we show that the concept can be transferred also to irregularly shaped domains.
We have also analyzed the theoretical properties of the fictitious point RBF approximation for the one-dimensional Rosenau equation. We could show that the spectral convergence of the spatial approximation carries over to the PDE solution, while the growth of the error in time in our estimate strongly depends on the bounds on the non-linear term.
To conclude, both of the implemented approaches are promising for problems with multiple boundary conditions, especially for geometries where spectral methods cannot easily be applied. Global RBF approximations such as the ones used here are competitive for problems in one or two space dimensions, but the computational cost can become prohibitive for higher-dimensional problems due to the need to solve dense linear systems. Therefore, an interesting future direction is to see how resampling and fictitious point methods can be combined with localized (stencil or partition based) RBF methods.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Appendix

Details of Error Estimation

In this appendix, we derive the error bound for the FP–RBF approximation of Sect. 3.2. For the analysis, we assume that \(\alpha >0\) is constant and that the function g is a polynomial of degree \(q+1\) with \(q\ge 1\).

ODE System for the Semi-discrete Approximation Error

We work with spatially discrete solution and approximation values evaluated at the interior node points \(x_i\), \(i=3,\ldots ,N-2\). We denote the exact solution vector and its derivatives by \(U_\mathscr {L}(t)=(\mathscr {L}u(x_3,t),\ldots ,\mathscr {L}u(x_{N-2},t))^T\). From the Rosenau equation (3.1), we have
$$\begin{aligned} U^\prime +\alpha U_{xxxx}^\prime =G_u(U)U_x. \end{aligned}$$
(A.1)
For the RBF approximation, we use (3.12), but to simplify notation we replace \(S_d\) with S and the matrix \(A_\alpha \) with the constant \(\alpha \), which yields
$$\begin{aligned} (I+\alpha \tilde{\varPsi }_{xxxx})S^\prime +\alpha F_{xxxx}^\prime =G_u(S)\tilde{\varPsi }_{x}S+G_u(S)F_x. \end{aligned}$$
(A.2)
We introduce the auxiliary function z(xt), which interpolates the exact solution at each time and also satisfies the boundary conditions (1.2) and (1.3),
$$\begin{aligned} z(x,t)=\sum _{j=3}^{N-2}\tilde{\psi }_j(x)u(x_j,t)+F(x,t). \end{aligned}$$
(A.3)
We denote the auxiliary function vector by Z(t) and note that
$$\begin{aligned} Z_\mathscr {L}=\tilde{\varPsi }_\mathscr {L}U+F_\mathscr {L} \end{aligned}$$
(A.4)
Furthermore, \(Z(t)=U(t)\), \(Z'(t)=U'(t)\), and \(Z(0)=S(0)=U(0)\), while \(Z_\mathscr {L}\ne U_\mathscr {L}\).
We define the discrete interpolation error vector \(\varepsilon (t)=U(t)-Z(t)\). The interpolation error itself is zero at the node points, but its derivatives \(\varepsilon _\mathscr {L}\) are non-zero. By noting that \(U(t) = \varepsilon (t) + Z(t)\) and by using (A.4) for the derivatives of Z(t), we can rewrite (A.1) as
$$\begin{aligned} U^\prime +\alpha (\varepsilon _{xxxx}^\prime +\tilde{\varPsi }_{xxxx}U^\prime +F_{xxxx}^\prime )= & {} G_u(U)(\varepsilon _{x}+\tilde{\varPsi }_{x}U+F_x). \end{aligned}$$
(A.5)
Finally, we introduce the discrete error \(E(t)=U(t)-S(t)\), and subtract (A.2) from (A.5) to get
$$\begin{aligned} \underbrace{(I+\alpha \tilde{\varPsi }_{xxxx})}_{Q}E^\prime + \alpha \varepsilon _{xxxx}^\prime = G_u(U)\varepsilon _{x}+ G_u(U)\tilde{\varPsi }_{x}U -G_u(S)\tilde{\varPsi }_{x}S + (G_u(U)-G_u(S))F_x. \end{aligned}$$
(A.6)
This equation can be seen as an ODE-system for the error E(t) by writing it as
$$\begin{aligned} QE^{\prime }(t)=H(t), \end{aligned}$$
(A.7)
where \(H(t)=-\alpha \varepsilon ^{\prime }_{xxxx}(t)+ G_u(U)\tilde{\varPsi }_xU-G_u(S)\tilde{\varPsi }_xS+G_u(U)\varepsilon _x(t)+ (G_u(U)-G_u(S))F_x(t)\). In the following, we will consider H(t) as a forcing function. For a discussion of the non-singularity of Q, see Sect. 3.2. The system of ODEs (A.7) can be formally integrated to yield
$$\begin{aligned} E(t)=\int _{0}^{t}Q^{-1}H(\tau )d\tau . \end{aligned}$$
(A.8)
In the following subsections, we will look into each part of the forcing function.

Estimates for the Non-linear Term

In order to determine the influence of the non-linear term on convergence and stability, we consider the particular form \(g(u)=u^{q+1}\), \(q\ge 1\). This matches the functions typically used in the literature. Furthermore, we will use an estimate by Park from [28] showing that with this form of g(u),
$$\begin{aligned} |u(x,t)|\le C(1+t)^{1-\frac{q}{5}},\quad t > 0,\quad x\in \mathbb {R}. \end{aligned}$$
(A.9)
We can then form the following estimate
$$\begin{aligned} g_u(u)=(q+1)u^q\le \tilde{C}(1+t)^{q(1-\frac{q}{5})},\quad t > 0. \end{aligned}$$
(A.10)
Note that the exponent can never become larger than 1.25, which occurs at \(q=2.5\). For the second derivative, we have
$$\begin{aligned} \frac{dg_u}{du}(u)=q(q+1)u^{q-1} \le \tilde{\tilde{C}}(1+t)^{(q-1)(1-\frac{q}{5})},\quad t>0. \end{aligned}$$
(A.11)
If we instead consider a point s close to u, we have
$$\begin{aligned} \frac{dg_u}{du}(s)&=q(q+1)s^{q-1}=q(q+1)(u+(s-u))^{q-1}\nonumber \\&= q(q+1)\sum _{p=0}^{q-1}\left( \begin{array}{c}q-1\\ p\end{array}\right) u^{q-1-p}(s-u)^p \end{aligned}$$
(A.12)
$$\begin{aligned}&\le \frac{q!}{\lfloor \frac{q-1}{2} \rfloor !^2}\left\{ \begin{array}{ll} C(1+t)^{(q-1)(1-\frac{q}{5})}{\sum }_{p=0}^{q-1}|u-s|^p, &{} q\le 5,\\ C(1+t)^{(1-\frac{q}{5})}{\sum }_{p=0}^{q-1}|u-s|^p, &{} q>5. \end{array},\right. \quad t>0, \end{aligned}$$
(A.13)
which allows the following estimate
$$\begin{aligned} |g_u(u)-g_u(s)|\le C_q(1+t)^{\tilde{q}}\sum _{p=1}^q |u-s|^p, \end{aligned}$$
(A.14)
where \(\tilde{q}=\max ((q-1)(1-\frac{q}{5}),(1-\frac{q}{5}))\).

Term by Term Estimates for the Error Contributions

In this section, we derive an estimate for the discrete approximation error. We first note that H(t) is a sum of a number of terms \(H_j(t)\) and split the integral in (A.8) to get
$$\begin{aligned} \Vert E(t)\Vert _\infty =\left\| \int _{0}^{t}Q^{-1}H(\tau )d\tau \right\| _\infty \le \sum _{j}\left\| \int _{0}^{t}Q^{-1}H_j(\tau )d\tau \right\| _\infty . \end{aligned}$$
For the first error term, we have \(H_1(t)=-\alpha \varepsilon _{xxxx}^\prime (t)\). Integration leads to
$$\begin{aligned} E_1(t)&=-\alpha \int _{0}^{t}Q^{-1}\varepsilon ^{\prime }_{xxxx}(\tau )d\tau =-\alpha Q^{-1}\left( \varepsilon _{xxxx}(t)-\varepsilon _{xxxx}(0)\right) , \end{aligned}$$
(A.15)
from which we obtain the estimate
$$\begin{aligned} \Vert E_1(t)\Vert _\infty&\le |\alpha |\Vert Q^{-1}\Vert _\infty \max _{0\le \tau \le t}\Vert \varepsilon _{xxxx}(\tau )\Vert _\infty . \end{aligned}$$
(A.16)
The second error term that we consider is generated by \(H_2(t)=G_u(U)\varepsilon _x(t)\), leading to
$$\begin{aligned} \Vert E_2(t)\Vert _\infty&=\left\| \int _{0}^{t}Q^{-1}G_u(U)\varepsilon _x(\tau )d\tau \right\| _\infty \nonumber \\&\le \Vert Q^{-1}\Vert _\infty \tilde{C}\max _{0\le \tau \le t}\Vert \varepsilon _{x}(\tau )\Vert _\infty \int _{0}^{t}(1+\tau )^{q(1-\frac{q}{5})}d\tau \nonumber \\&=\Vert Q^{-1}\Vert _\infty \tilde{C}\max _{0\le \tau \le t}\Vert \varepsilon _{x}(\tau )\Vert _\infty \left( \frac{(1+t)^{q(1-\frac{q}{5})+1}-1}{q(1-\frac{q}{5})+1}\right) \end{aligned}$$
(A.17)
where we used (A.10) for the term involving \(G_u(U)\).
The final part focuses on the non-linear term and is the most complicated. We start by rewriting the generating term
$$\begin{aligned} H_3(t)&=G_u(U)\tilde{\varPsi }_xU-G_u(S)\tilde{\varPsi }_xS+(G_u(U)-G_u(S))F_x(t)\nonumber \\&=G_u(U)\tilde{\varPsi }_x(U-S)+(G_u(U)-G_u(S))(F_x(t)+\tilde{\varPsi }_x(U+(S-U))). \end{aligned}$$
(A.18)
If we rewrite equation (3.10) in matrix form, the function \(F_x(t)\) can be expressed as
$$\begin{aligned} F_x(t)=B_xF(t) = \left( \varPsi _{x,b}-\varPsi _{x,f}B_f^{-1}B_b\quad \varPsi _{x,f}B_f^{-1} \right) \left( \begin{array}{c}F_1(t)\\ F_2(t)\end{array}\right) . \end{aligned}$$
(A.19)
Using this, we can provide the first estimate for contribution to the error from the term \(H_3(t)\) given by (A.18)
$$\begin{aligned}&\Vert E_3(t)\Vert _\infty \nonumber \\&\quad = \left\| \int _{0}^{t}Q^{-1}\left( G_u(U)\tilde{\varPsi }_x(U-S)+(G_u(U)-G_u(S))(B_xF(\tau )+\tilde{\varPsi }_x(U+(S-U)))\right) d\tau \right\| _\infty \nonumber \\&\quad \le \Vert Q^{-1}\Vert \Vert \tilde{\varPsi }_x\Vert \int _{0}^{t}\Vert G_u(U)\Vert \Vert U-S\Vert d\tau \nonumber \\&\qquad +\Vert Q^{-1}\Vert \Vert B_x\Vert \int _{0}^{t}\Vert G_u(U)-G_u(S)\Vert \Vert F(\tau )\Vert d\tau \nonumber \\&\qquad +\Vert Q^{-1}\Vert \Vert \tilde{\varPsi }_x\Vert \int _{0}^{t}\Vert G_u(U)-G_u(S)\Vert \Vert U+(S-U)\Vert d\tau . \end{aligned}$$
(A.20)
Using equations (A.9), (A.11), and (A.14) together with the inequality \(|U+(S-U)| \le |U|+|S-U|\) yields
$$\begin{aligned} \Vert E_3(t)\Vert _\infty&\le \Vert Q^{-1}\Vert \Vert \tilde{\varPsi }_x\Vert \int _{0}^{t}\tilde{C}(1+\tau )^{q(1-\frac{q}{5})} \Vert U-S\Vert d\tau \nonumber \\&\quad +\Vert Q^{-1}\Vert \Vert B_x\Vert \int _{0}^{t}C_q(1+\tau )^{\tilde{q}}\Vert F(\tau )\Vert \sum _{p=1}^{q}\Vert U-S\Vert ^p d\tau \nonumber \\&\quad +\Vert Q^{-1}\Vert \Vert \tilde{\varPsi }_x\Vert \int _{0}^{t}C_q(1+\tau )^{\tilde{q}}\sum _{p=1}^{q}\Vert U-S\Vert ^p\left( C(1+\tau )^{(1-\frac{q}{5})}+\Vert U-S\Vert \right) d\tau . \end{aligned}$$
(A.21)
Because this term contains the error on the right-hand side, it is the most difficult one to include in the error estimate. We simplify it as far as possible. First we write
$$\begin{aligned} \Vert E_3(t)\Vert _\infty&\le \Vert Q^{-1}\Vert \max (\Vert \tilde{\varPsi }_x\Vert ,\Vert B_x\Vert ) \int _{0}^{t} \sum _{p=1}^{q+1}b_p(\tau )\Vert E(\tau )\Vert ^p\,\mathrm {d}\tau , \end{aligned}$$
(A.22)
where
$$\begin{aligned} b_1(\tau )&=\tilde{C}(1+\tau )^{q(1-\frac{q}{5})} +C_q(1+\tau )^{\tilde{q}}\left( \Vert F(\tau )\Vert + C(1+\tau )^{(1-\frac{q}{5})}\right) \end{aligned}$$
(A.23)
$$\begin{aligned} b_p(\tau )&=C_q(1+\tau )^{\tilde{q}}\left( \Vert F(\tau )\Vert + C(1+\tau )^{(1-\frac{q}{5})} + 1\right) ,\quad p=2,\ldots ,q, \end{aligned}$$
(A.24)
$$\begin{aligned} b_{q+1}(\tau )&=C_q(1+\tau )^{\tilde{q}} \end{aligned}$$
(A.25)
Then we make the observation that either the error is at most one, in which case \(\Vert E(\tau )\Vert \ge \Vert E(\tau )\Vert ^p\) for \(p>1\), or the error is larger than one, in which case \(\Vert E(\tau )\Vert ^{q+1}\ge \Vert E(\tau )\Vert ^p\) for \(p\le q\). We let \(\nu =1\) or \(\nu =q+1\), sum up the coefficients, and pick the highest power of \((1+\tau )\) to get
$$\begin{aligned} \Vert E_3(t)\Vert _\infty&\le \tilde{C}_q\Vert Q^{-1}\Vert \max (\Vert \tilde{\varPsi }_x\Vert ,\Vert B_x\Vert ) \int _{0}^{t}(1+\tau )^{\tilde{q}(1-\frac{q}{5})}\Vert E(\tau )\Vert ^\nu \,\mathrm {d}\tau , \end{aligned}$$
(A.26)
where
$$\begin{aligned} \tilde{C}_q=(q+1)\left( 2+\max _{0\le \tau \le t}\Vert F(\tau )\Vert \right) \max (\tilde{C},C_q,C_qC). \end{aligned}$$
(A.27)
Table 1 shows the different powers that are involved as a function of q. The final power in the estimate decreases with q up to \(q=5\) and then grows again.
Table 1  Values of the different powers involved in the error estimates

q     \((1-q/5)\)   \(q(1-q/5)\)   \(\tilde{q}\)   \(\tilde{q}(1-q/5)\)
1     0.8           0.8            0.8             0.64
2     0.6           1.2            0.6             0.36
3     0.4           1.2            0.8             0.32
4     0.2           0.8            0.6             0.12
5     0             0              0               0
6     -0.2          -1.2           -0.2            0.04
7     -0.4          -2.8           -0.4            0.16
8     -0.6          -4.8           -0.6            0.36
9     -0.8          -7.2           -0.8            0.64
10    -1            -10            -1              1
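The entries of Table 1 follow directly from \(\tilde{q}=\max ((q-1)(1-\frac{q}{5}),(1-\frac{q}{5}))\) and can be regenerated with a few lines of Python (a verification sketch, not part of the method):

```python
# Recompute the powers listed in Table 1 from the definitions in the text.
def powers(q):
    w = 1 - q / 5
    tilde_q = max((q - 1) * w, w)   # tilde{q} as defined after (A.14)
    return w, q * w, tilde_q, tilde_q * w

# Spot checks against the tabulated values (up to floating-point rounding).
assert abs(powers(1)[3] - 0.64) < 1e-12
assert abs(powers(4)[3] - 0.12) < 1e-12
assert abs(powers(6)[3] - 0.04) < 1e-12
assert abs(powers(10)[3] - 1.0) < 1e-12
```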

Global Error Bound

By combining the error terms (A.16), (A.17) and (A.26), we get the following relation for the error due to the spatial discretization
$$\begin{aligned} \Vert E(t)\Vert _{\infty }&\le C_1\max _{0\le \tau \le t}\Vert \varepsilon _{xxxx}(\tau )\Vert _\infty +C_2(t)\max _{0\le \tau \le t}\Vert \varepsilon _{x}(\tau )\Vert _\infty \nonumber \\&\quad +C_3\int _{0}^{t}(1+\tau )^{\tilde{q}(1-\frac{q}{5})}\Vert E(\tau )\Vert ^\nu \,\mathrm {d}\tau , \end{aligned}$$
(A.28)
where
$$\begin{aligned} C_1&= |\alpha |\Vert Q^{-1}\Vert _{\infty }, \end{aligned}$$
(A.29)
$$\begin{aligned} C_2(t)&=\tilde{C}\Vert Q^{-1}\Vert _{\infty }\left( \frac{(1+t)^{q(1-\frac{q}{5})+1}-1}{q(1-\frac{q}{5})+1}\right) , \end{aligned}$$
(A.30)
$$\begin{aligned} C_3&=\tilde{C}_q\Vert Q^{-1}\Vert \max (\Vert \tilde{\varPsi }_x\Vert ,\Vert B_x\Vert ). \end{aligned}$$
(A.31)
To convert this into an error estimate, we need the following Gronwall inequality [8]:
Lemma 1
(Gronwall inequality) Let E(t), a(t), and \(k(t)\ge 0\) be continuous functions on \([0,b]\). We assume that for \(t\in [0,b]\) we have the inequality
$$\begin{aligned} E(t)\le a(t)+\int _{0}^{t}k(\tau )E^n(\tau )\, \mathrm {d}\tau . \end{aligned}$$
(A.32)
Then for the case \(n=1\) it holds
$$\begin{aligned} E(t)\le a(t)+\int _{0}^{t}k(\tau )a(\tau ) e^{\int _{\tau }^{t}k(u)du}\, \mathrm {d}\tau . \end{aligned}$$
(A.33)
If the function a(t) is also non-decreasing, then
$$\begin{aligned} E(t)\le a(t) e^{\int _{0}^{t}k(\tau )\, \mathrm {d}\tau }. \end{aligned}$$
(A.34)
For the case \(n\ge 2\)
$$\begin{aligned} E(t)\le a(t)\left[ 1-(n-1)\int _{0}^{t}k(\tau )a^{n-1}(\tau )\, \mathrm {d}\tau \right] ^{-\frac{1}{n-1}}, \quad t\le \beta _n, \end{aligned}$$
(A.35)
where \(\beta _n=\sup \{t:(n-1)\int _{0}^{t}k(\tau )a^{n-1}(\tau )\, \mathrm {d}\tau <1\}\).
In our case, it can easily be verified that the function
$$\begin{aligned} a(t)=C_1\max _{0\le \tau \le t}\Vert \varepsilon _{xxxx}(\tau )\Vert _\infty +C_2(t)\max _{0\le \tau \le t}\Vert \varepsilon _{x}(\tau )\Vert _\infty \end{aligned}$$
(A.36)
is non-decreasing in time. For the case of small errors, \(n=\nu =1\), and
$$\begin{aligned} \int _{0}^{t}k(\tau )\, \mathrm {d}\tau =C_3\int _{0}^{t}(1+\tau )^{\tilde{q}(1-\frac{q}{5})}\, \mathrm {d}\tau = C_3\frac{(1+t)^{\tilde{q}(1-\frac{q}{5})+1}-1}{\tilde{q}(1-\frac{q}{5})+1}, \end{aligned}$$
(A.37)
the Gronwall inequality leads to
$$\begin{aligned} \Vert E(t)\Vert _{\infty }\le \left[ C_{1}\max _{0\le \tau \le t}\Vert \varepsilon _{xxxx}(\tau )\Vert _\infty +C_2(t)\max _{0\le \tau \le t}\Vert \varepsilon _{x}(\tau )\Vert _\infty \right] e^{C_3\frac{(1+t)^{\tilde{q}(1-\frac{q}{5})+1}-1}{\tilde{q}(1-\frac{q}{5})+1}}. \end{aligned}$$
(A.38)
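The closed-form time integral in (A.37) is elementary but easy to get wrong; it can be sanity-checked numerically with a short sketch (the midpoint rule here is purely for verification):

```python
# Verify that the antiderivative used in (A.37) matches a midpoint-rule
# approximation of the integral of (1 + tau)**b over [0, t].
def closed_form(b, t):
    return ((1 + t) ** (b + 1) - 1) / (b + 1)

def midpoint(b, t, m=100000):
    h = t / m
    return h * sum((1 + (i + 0.5) * h) ** b for i in range(m))

# b = tilde{q}*(1 - q/5) = 0.12 corresponds to q = 4, as in Sect. 6.1.
b, t = 0.12, 5.0
assert abs(closed_form(b, t) - midpoint(b, t)) < 1e-6
```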
For the case of errors larger than one, we do not carry out the full derivation, but note that the limit on the time interval becomes less severe when a(t) is small enough, which is the case when the spatial resolution is high enough.
It remains to insert estimates for the derivatives of the RBF interpolation errors. These are typically measured in terms of the fill distance h, which in one space dimension with uniform nodes becomes \(h=\frac{1}{2}|x_{j+1}-x_j|\), and more generally for a domain \(\varOmega \) in d dimensions and a node set X is defined as
$$\begin{aligned} h=\sup _{x\in \varOmega } \min _{x_j\in X}\Vert x-x_j\Vert . \end{aligned}$$
(A.39)
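For intuition, the definition (A.39) can be approximated by brute force on a dense set of evaluation points; for uniform nodes on an interval it reduces to half the node spacing, consistent with the one-dimensional expression above. A minimal sketch:

```python
# Brute-force approximation of the fill distance (A.39) on an interval [a, b]:
# h = sup_{x in Omega} min_{x_j in X} |x - x_j|, with the sup taken over a
# dense evaluation grid of m points.
def fill_distance(nodes, a, b, m=10001):
    return max(
        min(abs(a + i * (b - a) / (m - 1) - xj) for xj in nodes)
        for i in range(m)
    )

n = 11
nodes = [i / (n - 1) for i in range(n)]  # uniform nodes on [0, 1], spacing 0.1
h = fill_distance(nodes, 0.0, 1.0)
assert abs(h - 0.05) < 1e-3              # half the node spacing, as stated
```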
Exponential estimates for inverse multiquadric interpolants are given in [30], provided that \(\varOmega \) is a bounded domain with Lipschitz boundary that satisfies an interior cone condition:
$$\begin{aligned} \Vert \varepsilon _\mathscr {L}\Vert _\infty \le c e^{-{\gamma /\sqrt{h}}}\Vert u\Vert _{\mathscr {N}(\varOmega )}, \end{aligned}$$
(A.40)
where the constant \(\gamma \) depends on the order of the differential operator \(\mathscr {L}\), the dimension d, and the shape parameter \(\varepsilon \) of the RBF, and c is a positive constant. The norm on the right-hand side, \(\Vert \cdot \Vert _{\mathscr {N}(\varOmega )}\), is the native space norm. For more details about this norm, see [10, 30].
Now, by inserting the interpolation error estimate (A.40) into the global error estimate (A.38) we get
$$\begin{aligned} \Vert E(t)\Vert _{\infty }\le C(t) e^{-\frac{\gamma }{\sqrt{h}}}e^{C_3\frac{(1+t)^{\tilde{q}(1-\frac{q}{5})+1}-1}{\tilde{q}(1-\frac{q}{5})+1}} \max _{0\le \tau \le t}\Vert u(\tau )\Vert _{\mathscr {N}(\varOmega )}, \end{aligned}$$
(A.41)
where \(C(t)=c(C_1+C_2(t))\). The final estimate tells us that the spatial RBF discretization does allow for exponential convergence in \(1/\sqrt{h}\), but we should expect the error to grow in time. For any finite interval \(t\in [0,T]\), however, the growth in time is bounded.

Numerical Investigation of the Matrix Norms in the Estimates

There are three different matrix norms that appear in equations (A.29)–(A.31) of the error estimates. We do not have theoretical bounds for these, and therefore we investigate their behavior numerically. Based on previous experience of RBF approximation, we have selected the following representation of the method parameters: the fill distance h, the relative shape parameter value \(\varepsilon h\), and the (half) domain size L. In Fig. 13, the norms are plotted as functions of h for different combinations of the parameters. The chosen values are such that they are reasonable for approximation. By choosing extreme values, different behaviors can be observed. We see that the norm \(\Vert Q^{-1}\Vert \) does not change much with h or \(\varepsilon h\), but slightly with L. Both of the norms \(\Vert B_x\Vert \) and \(\Vert \tilde{\varPsi }_x\Vert \) grow as \(h^{-1}\), and we can observe some instability in the values for small \(\varepsilon \). This rate is what would be expected for a first derivative operator, such as the ones represented by these matrices. Changing L has only a small effect in this case as well.
Literature
7. Cipra, B.: What's Happening in the Mathematical Sciences, vol. 2. American Mathematical Society, East Providence, RI (1994)
8. Dragomir, S.S.: Some Gronwall Type Inequalities and Applications. Nova Science Publishers, Hauppauge, NY (2003)
10. Fasshauer, G.E.: Meshfree Approximation Methods with MATLAB. Interdisciplinary Mathematical Sciences, vol. 6. World Scientific, Hackensack, NJ (2007)
21. Kim, Y.D., Lee, H.Y.: The convergence of finite element Galerkin solution for the Roseneau equation. Korean J. Comput. Appl. Math. 5(1), 171–180 (1998)
23. Larsson, E., Shcherbakov, V., Heryudono, A.: A least squares radial basis function partition of unity method for solving PDEs. SIAM J. Sci. Comput. (2017) (to appear)
27. Park, M.A.: Model equations in fluid dynamics. Ph.D. thesis, Tulane University (1990)
28. Park, M.A.: Pointwise decay estimates of solutions of the generalized Rosenau equation. J. Korean Math. Soc. 29(2), 261–280 (1992)
29. Petzold, L.R.: Numerical solution of differential-algebraic equations. In: Theory and Numerics of Ordinary and Partial Differential Equations (Leicester, 1994), Adv. Numer. Anal. IV, pp. 123–142. Oxford University Press, New York (1995). https://doi.org/10.1137/1.9781611971224
31. Rosenau, P.: Dynamics of dense discrete systems: high order effects. Prog. Theor. Phys. 79(5), 1028–1042 (1988)
Metadata
Title
Radial Basis Function Methods for the Rosenau Equation and Other Higher Order PDEs
Authors
Ali Safdari-Vaighani
Elisabeth Larsson
Alfa Heryudono
Publication date
15-11-2017
Publisher
Springer US
Published in
Journal of Scientific Computing / Issue 3/2018
Print ISSN: 0885-7474
Electronic ISSN: 1573-7691
DOI
https://doi.org/10.1007/s10915-017-0598-1
