In this paper, a stable collocation method for solving nonlinear fractional delay differential equations is proposed by constructing a new set of multiscale orthonormal bases of \(W^{1}_{2,0}\). Error estimates for the approximate solutions are given, and the highest convergence order reaches four in the sense of the norm of \(W_{2,0}^{1}\). To handle the nonlinearity, we use Newton’s method to transform the nonlinear equation into a sequence of linear equations. For the linear equations, a rigorous theory is given for obtaining their ε-approximate solutions by solving a system of equations or searching for a minimum. A stability analysis is also provided. Several examples illustrate the efficiency of the proposed method.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1 Introduction
Nowadays, fractional differential equations are a hot topic in the field of differential equations owing to their widespread applications in many fields of science [1, 2]. Among them, fractional delay differential equations have begun to attract the attention of many researchers. These equations also have many applications in areas such as control theory, biology, and economics [3, 4]. Since some models depend strongly on past states, inserting a time delay makes them more realistic. Therefore, developing the theory and numerical algorithms for fractional delay differential equations is important. For example, Hu and Zhao [5] give conditions for the asymptotic stability of a nonlinear fractional system with distributed time delay by utilizing monotonicity properties and the stability theorem for the fractional linear system. Pimenov et al. [6] use a BDF difference scheme based on approximations of Clenshaw and Curtis type to obtain the numerical solution of the following equation:
For (1), Moghaddam and Mostaghim [4] use the fractional finite difference method to obtain numerical solutions. Saeed et al. [7] draw on the method of steps and the Chebyshev wavelet method to solve the nonlinear fractional delay differential equation:
and obtain its approximate solutions. However, to the best of our knowledge, there are few articles on fractional delay differential equations, especially nonlinear ones.
Newton’s iterative method is a powerful tool for solving nonlinear problems, and many researchers have studied and generalized it. Deuflhard [8] in his monograph constructs adaptive Newton algorithms for specific nonlinear problems. Krasnosel’skii et al. [9] in their monograph study the Newton-Kantorovich method, give a modified version of it, and solve the problem of choosing initial approximations. Xu et al. [10] use a quasi-Newton method to linearize a nonlinear operator equation, and so on. Theoretical analysis shows that Newton’s iterative formula converges with order two.
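As an illustration of the second-order convergence just mentioned, the classical scalar Newton iteration can be sketched as follows. This is a minimal example of our own, not the operator version used later in the paper:

```python
import math

# Minimal scalar Newton iteration: solve f(x) = 0 with the update
# x <- x - f(x)/f'(x), which converges with order two near a simple root.
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the positive root of x^2 - 2 = 0, starting from x0 = 1.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Near the root, the number of correct digits roughly doubles per iteration, which is the quadratic convergence behavior the theoretical analysis refers to.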
Motivated by the abovementioned excellent works, in this paper we deal with the nonlinear fractional delay differential equation (2) in the case p = 1, that is, the following equation:
where f has continuous second-order partial derivatives, and g(x) ∈ C[0, 1] and y0(x) ∈ C2 [−τ, 0] are known functions. τ > 0 is a constant delay. The fractional derivative is in the sense of Caputo, and y(x) ∈ C1 [−τ, 1] is the unknown function.
In this article, we develop a stable and effective collocation method to solve (3). The collocation method is one of the most efficient methods for obtaining accurate numerical solutions of differential equations, including equations with variable coefficients and nonlinear equations [11–14]. The stability of collocation methods has always been an important topic. Here, stability means that when there are many collocation nodes, the resulting systems of equations are not ill conditioned and the results remain valid. For a stable collocation method, the accuracy can be improved by increasing the number of approximation terms. Accordingly, it is particularly important to establish a high-precision, stable collocation method.
The choice of bases of a space is important for a collocation method: approximate solutions of different accuracy can be derived for the same equation by using different bases. To obtain higher-accuracy solutions of the nonlinear fractional differential equation, we construct a new set of multiscale orthonormal bases of \(W^{1}_{2,0}\), give error estimates for the approximate solutions, and prove in Section 3 that the highest convergence order reaches four in the sense of \(W_{2,0}^{1}\).
Noting that the problem of selecting initial values for Newton’s iterative method has been well solved in [9], in this paper we transform (3) into a sequence of linear equations by using Newton’s iterative method, and then propose a new stable collocation method to solve them. The final numerical experiments show that our method outperforms that of [15] in dealing with this kind of equation.
The remainder of this paper is organized as follows: In Section 2, some relevant definitions and properties of the fractional calculus and the spaces \({W_{2}^{1}}\) and \(W_{2,0}^{1}\) are introduced. In Section 3, we construct a set of multiscale orthonormal bases of \(W^{1}_{2,0}\) and give an error estimate for the approximate solution. In Section 4, we construct the ε-approximate solution method and apply it to the linear fractional delay differential equation. In Section 5, we analyze the stability of the ε-approximate solution method. In Section 6, we use Newton’s method to transform the nonlinear equation into a sequence of linear equations. In Section 7, we give the algorithm implementation of Newton’s iterative formula for the nonlinear fractional delay differential equation. In Section 8, three numerical examples clarify the effectiveness of the algorithm. In the last section, conclusions are presented.
2 Preliminaries and notations
In this section, some preliminary results about the fractional calculus operators and reproducing kernel spaces are recalled [16‐18].
Definition 2.1
The Riemann-Liouville (R-L) fractional integral operator \(J_{0}^{\alpha }\) is given by
It is easy to see that \({W^{1}_{2}}\subset C[0, 1]\). Similar to [17], we can prove that \({W^{1}_{2}}\) is not only a Hilbert space, but also a reproducing kernel space with the reproducing kernel
The inner product space \(W^{1}_{2,0}\) is defined as \( W^{1}_{2,0}\triangleq W^{1}_{2,0}[0, 1]=\{u(x)\in {W^{1}_{2}}|u(0)=0 \}\) with the inner product
The following definition will be used in Section 6.
Definition 2.5
The inner product space \(W^{\alpha }_{2}\) is defined as \( W^{\alpha }_{2}\triangleq W^{\alpha }_{2}[0, 1]=\{J_{0}^{\alpha }u(s)|u(s)\in W^{1}_{2,0}[0, 1]\}\). The inner product and the norm of \(W^{\alpha }_{2}\) are given by
Similar to [19], one can prove that \(W^{\alpha }_{2}\) is not only a Hilbert space, but also a reproducing kernel space.
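To make the R-L integral of Definition 2.1 concrete, \(J_{0}^{\alpha}u(x)=\frac{1}{\Gamma(\alpha)}\int_{0}^{x}(x-t)^{\alpha-1}u(t)\,\mathrm{d}t\) can be evaluated numerically. The sketch below is our own illustration, not part of the paper’s method: the substitution \(s=v^{1/\alpha}\) (with \(s=x-t\)) removes the weak endpoint singularity, after which a plain midpoint rule suffices.

```python
import math

def rl_fractional_integral(u, x, alpha, n=2000):
    """Riemann-Liouville integral (J_0^alpha u)(x).

    Writing the integral as 1/Gamma(alpha) * int_0^x s^(alpha-1) u(x-s) ds
    and substituting s = v^(1/alpha) turns the singular weight into a
    constant, giving 1/Gamma(alpha+1) * int_0^(x^alpha) u(x - v^(1/alpha)) dv,
    which is evaluated here with the composite midpoint rule.
    """
    if x == 0.0:
        return 0.0
    upper = x ** alpha
    h = upper / n
    total = 0.0
    for k in range(n):
        v = (k + 0.5) * h                       # midpoint of subinterval
        total += u(x - v ** (1.0 / alpha))
    return total * h / math.gamma(alpha + 1.0)

# Check against the closed form J^alpha t = Gamma(2)/Gamma(2+alpha) * x^(1+alpha)
approx = rl_fractional_integral(lambda t: t, 1.0, 0.5)
exact = math.gamma(2.0) / math.gamma(2.5)       # value at x = 1
```

The agreement with the closed-form value for monomials is a convenient sanity check for any implementation of \(J_0^{\alpha}\).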
3 Construction of the multiscale orthonormal bases of \(W^{1}_{2,0}\) and error estimations
In this section, we construct the multiscale orthonormal bases of \(W^{1}_{2,0}\) by the famous Legendre multiwavelets, and give error estimations of approximate solutions.
Define the cubic Legendre scaling functions in the interval [0, 1] [20],
where \(\delta _{ij}=\left \{ \begin {array}{ll} 1, & i=j\\ 0, & i\neq j \end {array} \right .. \) So \(\{\psi _{i}(x)\}^{\infty }_{i=1}\) is orthonormal in \(W^{1}_{2,0}\).
Next, we prove that \(\{\psi _{i}(x)\}^{\infty }_{i=1}\) is complete in \(W^{1}_{2,0}\). Let \(\xi (x)\in W^{1}_{2,0}\) and (ξ(x), ψi(x))1 = 0, i = 1,2,⋯ . That is,
By Lemma 3.1, \(\{\varphi _{i}(x)\}^{\infty }_{i=1}\) is a set of multiscale orthonormal bases of L2 [0, 1], and we obtain \(\xi ^{\prime }(x)=0\). Noting that \(\xi (x)\in W^{1}_{2,0}\), we have ξ(0) = 0. So \(\xi (x)=\xi (0)+{{{\int \limits }_{0}^{x}}\xi ^{\prime }(t)\mathrm {d}t}=0\). Thus, \(\{\psi _{i}(x)\}^{\infty }_{i=1}\) is complete in \(W^{1}_{2,0}\) and \(\{\psi _{i}(x)\}^{\infty }_{i=1}\) is a set of orthonormal bases of \(W^{1}_{2,0}\). □
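The Legendre multiwavelet construction rests on the \(L^{2}[0,1]\)-orthonormality of the shifted Legendre polynomials, the scaling functions behind the multiscale basis. A quick numerical check (our own sketch; the normalization \(\sqrt{2i+1}\,P_{i}(2x-1)\) is the standard one) is:

```python
import math

def shifted_legendre(i, x):
    """L2[0,1]-orthonormal shifted Legendre polynomial
    sqrt(2i+1) * P_i(2x - 1), built with the three-term recurrence
    (k+1) P_{k+1}(t) = (2k+1) t P_k(t) - k P_{k-1}(t)."""
    t = 2.0 * x - 1.0
    if i == 0:
        val = 1.0
    elif i == 1:
        val = t
    else:
        p_prev, p = 1.0, t
        for k in range(1, i):
            p_prev, p = p, ((2 * k + 1) * t * p - k * p_prev) / (k + 1)
        val = p
    return math.sqrt(2 * i + 1) * val

def inner_l2(f, g, n=4000):
    """Midpoint-rule approximation of int_0^1 f(x) g(x) dx."""
    h = 1.0 / n
    return sum(f((k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

# Gram matrix of the first four scaling functions: should be the identity.
gram = [[inner_l2(lambda x, a=i: shifted_legendre(a, x),
                  lambda x, b=j: shifted_legendre(b, x))
         for j in range(4)] for i in range(4)]
```

Up to quadrature error, the Gram matrix is the identity, which is exactly the orthonormality that Lemma 3.1 lifts to the multiscale bases \(\{\psi_i(x)\}\).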
where \(a_{i}=(y(x),{J_{0}^{1}}\eta ^{i}(x))_{1}\), \(c_{ik}^{l}=(y(x),{J_{0}^{1}}\phi _{ik}^{l}(x))_{1}\).
Lemma 3.2
Let \(y(x)\in W_{2,0}^{1}\) and \(|y^{(j)}(x)|\leq M\) for all \(x\in [0, 1]\) and some \(j\in \{2,3,4,5\}\). Then \(|c_{ik}^{l}|\leq 2^{-(i-1)(j-1/2)}AM\), l = 0, 1, 2, 3, where A is a constant.
Proof
Without loss of generality, we first consider \(|c_{ik}^{0}|\).
and analyze the convergence order of the ε-approximate solutions. Here \(a(x),b(x),c(x),d(x),h(x)\in {W^{1}_{2}}[0, 1]\) and y0(x) ∈ C2 [−τ, 0] are known functions, y(x) is the unknown function, and τ > 0 is a constant delay. Similar to Theorem 2.1, we can prove \(D^{\alpha }_{C} y(x)\in C[0, 1]\). Here, we further assume \(D^{\alpha }_{C}y(x)\in {W^{1}_{2}}\).
by using the integration by parts. Hence, we have \(J_{0}^{\alpha }\omega (x)={{\int \limits }_{0}^{x}}(J_{0}^{\alpha }\omega )_{t}^{\prime }{\mathrm {d}}t\) and \(J_{0}^{\alpha }\omega (x)\in AC[0, 1].\)
holds. So \((J_{0}^{\alpha }\omega (x))_{x}^{\prime }\in L^{2}[0, 1]\). Noting that \(J_{0}^{\alpha }\omega (0)=0\), we have \(J_{0}^{\alpha }\omega (x)\in W^{1}_{2,0}\) and
The operator L defined by (7) is a bounded linear operator from \(W^{1}_{2,0}\) to \(W^{1}_{2,0}\).
Proof
Obviously, L is a linear operator. Noting that \(a(x),b(x),c(x),d(x)\in {W^{1}_{2}}[0, 1]\), we know that a(x), b(x), c(x), d(x) ∈ AC[0, 1]. Denote \(M_{1}=\max \limits \{\|a(x)\|_{1},\|b(x)\|_{1},\|c(x)\|_{1},\|d(x)\|_{1}\}.\)
Let \(\omega (x)\in W^{1}_{2,0}[0, 1]\). According to Lemmas 4.2 and 4.3, we have \(J_{0}^{\alpha }\omega \in W^{1}_{2,0}\), and there exists \(\tilde {M}_{\alpha }\) such that
which implies L is a bounded linear operator from \(W^{1}_{2,0}\) to \(W^{1}_{2,0}\). □
Theorem 4.2
Suppose the equation defined by (8) has a unique solution. Then L is a bijection from \(W^{1}_{2,0}\) to \(L(W^{1}_{2,0})\), and \(L^{-1}:L(W^{1}_{2,0})\rightarrow W^{1}_{2,0}\) not only exists but is also bounded.
Remark 4.1
It is easy to see that L is a bijection from \(W^{1}_{2,0}\) to \(L(W^{1}_{2,0})\). For the latter part of the conclusion, we can define an operator \(J:W^{1}_{2,0}\rightarrow W^{1}_{2,0},~J(\omega (x))=J^{\alpha }_{0}\omega (x),~\omega (x)\in W^{1}_{2,0}\). One can prove that the operator J is compact. Since the sum of finite compact operators is still compact and \(\{(I-J)\omega (x)|\omega (x)\in W^{1}_{2,0}\}\) is closed, one can obtain that \(L^{-1}:L(W^{1}_{2,0})\rightarrow W^{1}_{2,0}\) is bounded by the Inverse Operator Theorem in the Banach spaces.
Definition 4.1
y(x) is called an ε-approximate solution of (8) if ∥L(y) − g∥1 < ε for a given ε > 0.
Theorem 4.3
For any ε > 0, there exists a positive integer N such that for every fixed n ≥ N, \(\omega ^{*}_{n}(x)={\sum }^{n}_{i=0}c_{i}^{*}\psi _{i}(x)\) is an ε-approximate solution of (8), where \(\{c_{i}^{*}\}_{i=0}^{n}\) satisfies
Suppose ω(x) is the exact solution of (8). By Theorem 3.1, there exists a positive integer N such that for any n > N, there exists \(\omega _{n}(x)={\sum }_{i=0}^{n}c_{i}\psi _{i}(x)\) such that
We derive \({\sum }_{i=0}^{n}l_{i}L(\psi _{i}(x))\equiv 0\), that is, \(L({\sum }_{i=0}^{n}l_{i}\psi _{i}(x))\equiv 0.\) Since L is injective, \({\sum }_{i=0}^{n}l_{i}\psi _{i}(x)\equiv 0.\) Noting that \(\{\psi _{i}(x)\}_{i=0}^{n}\) are linearly independent, we obtain li = 0, i = 0, 1, ⋯, n. Therefore, f0(x), f1(x), ⋯, fn(x) are linearly independent, G is nonsingular, and the system of normal equations (13) has a unique solution. □
Remark 4.2
The unique solution of the normal equations (13) is denoted by \((c_{0}^{*},c_{1}^{*},\cdots ,c_{n}^{*})\). Similar to [22], we can prove \(S(c_{0},c_{1},\cdots ,c_{n})\geq S(c_{0}^{*},c_{1}^{*},\cdots ,c_{n}^{*}).\) Thus, (10) has a unique solution determined by (13). The desired approximate solution y∗(x) of (5) can be obtained from (9).
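A toy discrete analogue of forming and solving the normal equations (13): the Gram matrix \(G_{ij}=(f_i,f_j)\) of the transformed basis functions and the right-hand side \(b_i=(f_i,g)\) determine the minimizing coefficients of \(S\). This sketch is our own illustration under a discrete inner product, not the paper’s continuous \(W^{1}_{2,0}\) one:

```python
def solve_normal_equations(F, g):
    """F: list of basis vectors, g: target vector.
    Minimizes || sum_i c_i F[i] - g ||^2 by solving G c = b,
    where G[i][j] = <F[i], F[j]> and b[i] = <F[i], g>, using
    Gaussian elimination with partial pivoting."""
    n = len(F)
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    # Augmented Gram system [G | b]
    A = [[dot(F[i], F[j]) for j in range(n)] + [dot(F[i], g)]
         for i in range(n)]
    for col in range(n):                          # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= factor * A[col][c]
    coeffs = [0.0] * n                            # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = (A[i][n] - s) / A[i][i]
    return coeffs

# g lies in span{f0, f1}, so the residual vanishes and c = (1, 2).
F = [[1.0, 1.0, 1.0, 1.0], [0.0, 1.0, 2.0, 3.0]]
g = [1.0, 3.0, 5.0, 7.0]                          # = 1*f0 + 2*f1
coeffs = solve_normal_equations(F, g)
```

When the basis images are linearly independent, as proved in Theorem 4.4, the Gram matrix is nonsingular and this system has exactly one solution.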
Theorem 4.5
Assume that \(\omega ^{*}_{n}(x)={\sum }_{i=0}^{3}a_{i}{J_{0}^{1}}\eta ^{i}(x)+{\sum }_{i=1}^{n}{\sum }_{l=0}^{3}{\sum }_{k=0}^{2^{i-1}-1}c_{ik}^{l}{J_{0}^{1}}\phi _{ik}^{l}(x)\) obtained by Theorem 4.3 is an ε-approximate solution of (8), ω(x) is the exact solution of (8), and |ω(j)(x)| ≤ M, ∀ x ∈ [0, 1], for some j ∈ {2, 3, 4, 5}. Then \(\parallel \omega (x)-\omega ^{*}_{n}(x)\parallel _{1}\leq C\cdot 2^{-(j-1)n}\), where C is a constant.
Proof
According to Theorem 3.2, one has \(\parallel \omega (x)-\omega _{n}(x)\parallel _{1}\leq \bar {C}\cdot 2^{-(j-1)n}\) where \(\bar {C}\) is a constant. Thus, one can derive that
In this section, we consider the stability of our proposed method.
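Whether the resulting systems remain well conditioned depends on the spread of the eigenvalues of the coefficient matrix G. As one ingredient of such a check, the largest eigenvalue of a symmetric matrix can be estimated by power iteration; the following is a minimal sketch of ours, not code from the paper:

```python
def largest_eigenvalue(G, iters=500):
    """Power iteration for the dominant eigenvalue of a symmetric
    matrix G, returned as the Rayleigh quotient x^T G x / x^T x
    at the converged unit vector."""
    n = len(G)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(G[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    Gx = [sum(G[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * Gx[i] for i in range(n))

# Symmetric test matrix with eigenvalues 1 and 3.
G = [[2.0, 1.0], [1.0, 2.0]]
lam = largest_eigenvalue(G)
```

Applying the same iteration to \(G^{-1}\) yields the smallest eigenvalue, and the ratio of the two bounds the spectral condition number, which is what a stability analysis of the collocation system needs to control.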
Assume λ is an eigenvalue of the matrix G defined in the proof of Theorem 4.4; that is, there exists \(X=[x_{0},x_{1},\cdots ,x_{n}]^{T}\in \mathbb {R}^{n+1}\), X ≠ 0, such that GX = λX. Hence,
In this section, we solve (3) when f is nonlinear. In order to obtain high-accuracy approximate solutions, we employ F-derivative and Newton’s iterative formula.
Let u be the exact solution of the above equation and assume that \(D^{\alpha }_{C}u(x)=\omega (x)\). At the beginning of Section 4, we have proved that \(u(x)=J_{0}^{\alpha } \omega (x)\) and \(\omega \in W_{2,0}^{1}\). So \(u(x)\in W_{2}^{\alpha }\).
Define an operator \(F: W_{2}^{\alpha }[0, 1]\rightarrow {W_{2}^{1}}[0, 1]\),
Substituting \(u(x)+h(x), u^{\prime }(x)+h^{\prime }(x), u(x-\tau )+h(x-\tau ), u^{\prime }(x-\tau )+h^{\prime }(x-\tau )\) for y, z, p, q and \(u(x), u^{\prime }(x), u(x-\tau ), u^{\prime }(x-\tau )\) for y0, z0, p0, q0 in the above equation, we get
Since \(F^{\prime }(u)h=F_{1}^{\prime }(u)h+F_{2}^{\prime }(u)h\), then the conclusion holds. □
Remark 6.1
An inequality used here is \(\|h^{2}\|_{1}\leq 2\sqrt {2}\|h\|_{1}^{2}\). In fact, according to the definition of the norm ∥⋅∥1 of \( {W^{1}_{2}}\) and Lemma 2.1, we can obtain
Compute F(uk) and \(F^{\prime }(u_{k})(u_{k+1}-u_{k})\) according to (23) and (24).

(3) Compute R(x) by (25), solve the equations (26), and obtain {dk, i}.

(4) Compute ck+ 1, i = ck, i + dk, i (i = 0, 1, 2, ⋯ , n).

(5) If ∥uk+ 1 − uk∥C ≥ ε, go to (1); otherwise go to Step 5.

Step 5. Output the final approximate solution \(u_{k+1}={\sum }_{i=0}^{n}c_{k+1,i}J_{0}^{\alpha }\psi _{i}(x)\).
Using the software Mathematica, we obtain the approximate solution uk+ 1(x). Because of the high efficiency of Newton’s iterative method, few iterations are needed to reach the desired approximate solution.
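The outer loop of the steps above can be sketched schematically. The helpers here are hypothetical stand-ins of our own: `solve_linearized` plays the role of "solve (26) for the corrections d_{k,i}", and `diff_norm` plays the role of the C-norm test ∥u_{k+1} − u_k∥C:

```python
def newton_collocation(solve_linearized, diff_norm, c0, eps=1e-10, max_iter=20):
    """Iterate c_{k+1,i} = c_{k,i} + d_{k,i} until the update norm
    drops below eps, mirroring Steps (3)-(5) of the algorithm above."""
    c = list(c0)
    for _ in range(max_iter):
        d = solve_linearized(c)                     # corrections d_{k,i}
        c_next = [ci + di for ci, di in zip(c, d)]
        if diff_norm(c, c_next) < eps:              # ||u_{k+1} - u_k|| < eps
            return c_next
        c = c_next
    return c

# Scalar sanity check: the Newton correction for x^2 = 2 in this framework.
root = newton_collocation(
    lambda c: [-(c[0] ** 2 - 2.0) / (2.0 * c[0])],  # d_k = -f(c_k)/f'(c_k)
    lambda old, new: abs(new[0] - old[0]),
    [1.0])
```

In the paper the corrections come from the ε-approximate solution method applied to the linearized equation at each step; the stopping rule is exactly the one in Step (5).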
Remark 7.1
The algorithm can be extended to the case m − 1 < α ≤ m (m ∈ N+) for the fractional delay differential equation (3).
8 Numerical examples
In this section, the algorithm presented above is applied to linear and nonlinear fractional delay differential equations. Three examples illustrate the efficiency of the suggested algorithm. The relative error is defined as \(R_{n}=\frac {\|u-u_{n}\|_{C}}{\|u\|_{C}}\), where u is the exact solution and un is an approximate solution obtained by Theorem 4.3. Based on the numerical results, we adopt the formula \(\log _{2}[R_{n}/R_{2n-1}]\) to estimate the convergence order.
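The order-estimation formula can be applied directly to tabulated errors; for instance, the first estimated convergence order (ECO) of Table 1 is recovered, up to rounding of the printed errors, by:

```python
import math

# Empirical convergence order from two successive relative errors,
# using the formula log2(R_n / R_{2n-1}) quoted above.
def estimated_order(r_coarse, r_fine):
    return math.log2(r_coarse / r_fine)

# One entry of Table 1 (x^2.3 column, n = 17 -> n = 33):
eco = estimated_order(0.0001744, 0.0000513)
```

When the error behaves like \(R_n \approx C\cdot 2^{-pn}\) as in Theorem 4.5, this quotient estimates the exponent p.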
Example 8.1
Consider the linear fractional differential equation with delay
where g(x) is chosen such that the exact solution is x2.3 or x4.1. When the exact solution is x2.3, we solve the equation with n = 65, 257, 513, respectively, and obtain the corresponding approximate solutions. The relative errors between the exact solution and the approximate solutions are displayed in Fig. 1. When the exact solution is chosen to be x4.1, which is smoother, the relative errors are displayed in Fig. 2. One can see that the relative errors decrease as n increases, and decrease faster when the solution is smoother. In Table 1, we provide numerical results illustrating that the convergence order improves with the smoothness of the solution. Even when the solution is less smooth, the computing error is still acceptable for engineering purposes. Therefore, the algorithm is stable, reliable, and adaptive.
Table 1
The comparison of infinity-norm of the relative errors at discrete points and estimates of the convergence order on the interval [0, 1] between the exact solutions x2.3 and x4.1 for Example 8.1
n    | x2.3           | ECO     | x4.1             | ECO
-----|----------------|---------|------------------|--------
17   | 0.0001744      |         | 1.36584 × 10−6   |
33   | 0.0000513      | 1.7648  | 1.18324 × 10−7   | 3.52898
65   | 0.0000142      | 1.85935 | 9.00737 × 10−9   | 3.71549
129  | 3.9095 × 10−6  | 1.85585 | 6.77928 × 10−10  | 3.73190
257  | 1.0975 × 10−6  | 1.83273 | 5.27700 × 10−11  | 3.68334
513  | 3.1161 × 10−7  | 1.81643 | 4.21943 × 10−12  | 3.66446
Example 8.2
Let us consider the nonlinear fractional differential equation with delay
where g(x) is chosen such that the exact solution of this example is x2.3 or x4.3. The semi-log plots of the relative errors between the exact solution and the approximate solutions after different numbers of iterations are displayed in Figs. 3 and 4, with n = 65, 129, 257 in both cases. In Table 2, we compare the infinity-norm of the relative errors at the discrete points and the estimated convergence orders for both cases. The results show that our method remains valid, stable, and adaptive for nonlinear problems.
Table 2
The comparison of infinity-norm of the relative errors at discrete points and estimates of the convergence order on the interval [0, 1] between the exact solutions x2.3 and x4.3 for Example 8.2
n    | x2.3           | ECO     | x4.3             | ECO
-----|----------------|---------|------------------|--------
9    | 0.001229870    |         | 0.000017226      |
17   | 0.000355909    | 1.78893 | 1.23037 × 10−6   | 3.80745
33   | 0.000105201    | 1.75837 | 9.30212 × 10−8   | 3.72539
65   | 0.000029364    | 1.84102 | 6.26881 × 10−9   | 3.89129
129  | 8.15525 × 10−6 | 1.84825 | 4.12792 × 10−10  | 3.92471
257  | 2.29599 × 10−6 | 1.82861 | 2.79184 × 10−11  | 3.88613
Example 8.3
Consider the linear fractional differential equation with delay [15]
The exact solution of this example is x2. The relative errors between the exact solution and the approximate solutions are displayed in Fig. 5 with n = 65, 257, 513, 1025, respectively. One can see that the relative errors decrease as n increases. In Table 3, we compare the infinity-norm of the absolute errors at the discrete points with that of Ref. [15]. The numerical results show that our method has higher accuracy in this example, and the computing errors are acceptable for engineering.
Table 3
The comparison of infinity-norm of the absolute errors at discrete points with the backward difference method on the interval [0, 1] for Example 8.3
Step h | Difference method | n    | Present method
-------|-------------------|------|----------------
1/10   | 0.0491843         | 9    | 0.0002360
1/20   | 0.0276172         | 17   | 0.0000604
1/40   | 0.0146507         | 33   | 0.0000150
1/80   | 0.00756493        | 65   | 3.80892 × 10−6
1/160  | 0.00385284        | 129  | 4.4984 × 10−7
1/320  | 0.00194804        | 257  | 2.4383 × 10−7
1/640  | 0.000980855       | 513  | 6.1313 × 10−8
9 Conclusion
In this paper, we construct a new stable collocation method for solving a class of nonlinear fractional delay differential equations. More suitable multiscale orthonormal bases of \(W^{1}_{2,0}\) are constructed, error estimates for the approximate solutions are given, and the highest convergence order reaches four in the sense of the norm of \({W_{2}^{1}}\). Newton’s iterative formula is used to linearize the nonlinear equation, and for the resulting linear equations we develop an ε-approximate solution method based on the multiscale orthonormal bases. A concrete algorithm implementation is given. Numerical examples show that the presented method is more accurate than that of [15] in dealing with this kind of equation.
Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.