Abstract
This paper investigates the numerical solution of nonlinear Fredholm-Volterra integro-differential equations using the reproducing kernel Hilbert space method. The solution is represented as a series in the reproducing kernel space. Meanwhile, the n-term approximate solution is obtained and is proved to converge to the exact solution. Furthermore, the proposed method has the advantage that the approximate solution and its derivative can be evaluated at any point of the interval of integration. Numerical examples are included to demonstrate the accuracy and applicability of the presented technique. The results reveal that the method is very effective and simple.
1. Introduction
In recent years, there has been a growing interest in integro-differential equations (IDEs), which combine differential and Fredholm-Volterra integral equations. IDEs are often involved in the mathematical formulation of physical phenomena and can be encountered in various fields of science such as physics, biology, and engineering. They also arise in numerous applications, such as biomechanics, electromagnetics, elasticity, electrodynamics, fluid dynamics, heat and mass transfer, and oscillation theory [1–4].
The purpose of this paper is to extend the application of the reproducing kernel Hilbert space (RKHS) method to the following nonlinear Fredholm-Volterra IDE: subject to the initial condition where are real finite constants, is an unknown function to be determined, , , are continuous functions on , , , are continuous terms in as , , , , and depend on the problem discussed, and are reproducing kernel spaces.
In general, nonlinear Fredholm-Volterra IDEs do not always have solutions that can be obtained by analytical methods. In fact, many of the real physical phenomena encountered are almost impossible to solve analytically. For this reason, some authors have proposed numerical methods to approximate the solutions of nonlinear Fredholm-Volterra IDEs. To mention a few, in [5] the authors have discussed the Taylor polynomial method for solving IDEs (1.1) and (1.2) when , , and , where . The triangular functions method has been applied to solve the same equations when , , and , where , as described in [6]. Furthermore, the operational matrix with block-pulse functions method is carried out in [7] for the aforementioned IDEs in the case , , and , where . Recently, the hybrid Legendre polynomials and block-pulse functions approach for solving IDEs (1.1) and (1.2) when , , and , where , was proposed in [8]. The numerical solvability of Fredholm and Volterra IDEs and other related equations can be found in [9–11] and the references therein. However, none of the previous studies propose a methodical way to solve these equations. Moreover, the previous methods require considerable effort to achieve their results, their accuracy is limited, and they are usually developed for special types of IDEs (1.1) and (1.2).
Reproducing kernel theory has important applications in numerical analysis, differential equations, integral equations, probability and statistics, and other fields [12–14]. Recently, using the RKHS method, the authors in [15–29] have discussed singular linear two-point boundary value problems, singular nonlinear two-point periodic boundary value problems, nonlinear systems of boundary value problems, initial value problems, singular integral equations, nonlinear partial differential equations, operator equations, and fourth-order IDEs.
The outline of the paper is as follows: several reproducing kernel spaces are described in Section 2. In Section 3, a linear operator, a complete orthonormal system, and some essential results are introduced. Also, a method for the existence of solutions for (1.1) and (1.2) based on the reproducing kernel space is described. In Section 4, we give an iterative method for solving (1.1) and (1.2) numerically in the RKHS. Various numerical examples are presented in Section 5. This paper ends in Section 6 with some concluding remarks.
2. Several Reproducing Kernel Spaces
In this section, the several reproducing kernels needed are constructed in order to solve (1.1) and (1.2) using the RKHS method. Before the construction, we recall the reproducing kernel concept. Throughout this paper is the set of complex numbers, , , and the superscript in denotes the -th derivative of .
Definition 2.1 (see [18]). Let be a nonempty abstract set. A function is a reproducing kernel of the Hilbert space if(1)for each , ,(2)for each and , .
The last condition is called “the reproducing property”: the value of the function at the point is reproduced by the inner product of with . A Hilbert space which possesses a reproducing kernel is called an RKHS [18].
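The reproducing property can be illustrated numerically on the simplest space of this kind. The sketch below is illustrative only: it assumes the classical Sobolev space W¹₂[0,1] with inner product ⟨u, v⟩ = u(0)v(0) + ∫₀¹ u′(x)v′(x) dx, whose reproducing kernel is known to be K(x, y) = 1 + min(x, y); this simple space stands in for the spaces constructed below, whose kernels are more involved.

```python
import numpy as np

# Numerical check of the reproducing property <u, K(., y)> = u(y) in the
# Sobolev space W^1_2[0,1] (an illustrative stand-in) with inner product
#     <u, v> = u(0) v(0) + int_0^1 u'(x) v'(x) dx,
# whose reproducing kernel is the classical K(x, y) = 1 + min(x, y).

def K(x, y):
    return 1.0 + np.minimum(x, y)

def inner(u, v, n=200_001):
    """Trapezoidal approximation of the W^1_2 inner product <u, v>."""
    x = np.linspace(0.0, 1.0, n)
    du, dv = np.gradient(u(x), x), np.gradient(v(x), x)
    h = du * dv
    return u(0.0) * v(0.0) + np.sum((h[1:] + h[:-1]) / 2 * np.diff(x))

u = lambda x: np.sin(2.0 * x) + x**2        # arbitrary test function

for y in (0.25, 0.5, 0.9):
    Ky = lambda x, y=y: K(x, y)             # the kernel section K(., y)
    err = abs(inner(u, Ky) - u(y))
    print(f"y = {y}: |<u, K(., y)> - u(y)| = {err:.2e}")
```

For each y the computed inner product ⟨u, K(·, y)⟩ agrees with u(y) up to discretization error, which is exactly condition (2) of Definition 2.1.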
Next, we construct the space in which every function satisfies the initial condition (1.2) and then utilize the space .
Definition 2.2 (see [30]). are absolutely continuous on , , and . The inner product and the norm in are defined, respectively, by and , where .
Definition 2.3 (see [23]). is absolutely continuous on and . The inner product and the norm in are defined, respectively, by and , where .
In [23], the authors have proved that the space is a complete reproducing kernel space and its reproducing kernel function is given by From the definition of the reproducing kernel spaces and , we get .
The Hilbert space is called a reproducing kernel space if for each fixed and any , there exist (simply ) and such that . The next theorem formulates the reproducing kernel function .
Theorem 2.4. The Hilbert space is a reproducing kernel and its reproducing kernel function can be written as where and are unknown coefficients of .
Proof. Through several integrations by parts for (2.1), we obtain . Since , it follows that . Also, since , one obtains . Thus, if , , and , then . Now, for each , if also satisfies , where is the Dirac delta function, then . Obviously, is the reproducing kernel function of the space .
Next, we give the expression of the reproducing kernel function . The characteristic equation of is , and its characteristic values are with multiple roots. So, let the kernel be as defined in (2.3).
On the other hand, let satisfy , . Integrating from to with respect to and letting , we obtain the jump degree of at given by . From these conditions, the unknown coefficients of (2.3) can be obtained. This completes the proof.
By using the Mathematica software package, the representation of the reproducing kernel function is provided by
The following corollary summarizes some important properties of the reproducing kernel function .
Corollary 2.5. The reproducing kernel function is symmetric, unique, and for any fixed .
Proof. By the reproducing property, we have for each and . Now, let and be all the reproducing kernels of the space ; then . Finally, we note that .
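The defining properties just established can be verified symbolically on a concrete kernel of this type. Since the closed form above is not reproduced here, the sketch below makes illustrative assumptions: it works in the subspace of W²₂[0,1] with u(0) = 0, equipped with the inner product ⟨u, v⟩ = u′(0)v′(0) + ∫₀¹ u″(x)v″(x) dx, whose kernel is the standard piecewise cubic R(x, y) = xy + x²y/2 − x³/6 for x ≤ y (extended symmetrically); these stand in for the paper's space and kernel.

```python
import sympy as sp

# Symbolic check of <u, R(., y)> = u(y) for a representative reproducing
# kernel.  Assumed (illustrative) space: {u in W^2_2[0,1] : u(0) = 0} with
#     <u, v> = u'(0) v'(0) + int_0^1 u''(x) v''(x) dx,
# whose kernel is the piecewise cubic below (symmetric in x and y).
x, y = sp.symbols('x y', positive=True)
R = sp.Piecewise((x*y + x**2*y/2 - x**3/6, x <= y),
                 (x*y + y**2*x/2 - y**3/6, True))

u = sp.sin(x)                      # test function satisfying u(0) = 0

def residual(yv):
    """<u, R(., yv)> - u(yv); vanishes by the reproducing property."""
    Ry = R.subs(y, yv)
    lhs = (sp.diff(u, x).subs(x, 0) * sp.diff(Ry, x).subs(x, 0)
           + sp.integrate(sp.diff(u, x, 2) * sp.diff(Ry, x, 2), (x, 0, 1)))
    return sp.simplify(lhs - u.subs(x, yv))

for yv in (sp.Rational(1, 3), sp.Rational(3, 4)):
    print(yv, residual(yv))        # exact residual 0 at each test point
```

The same computation with x and y interchanged returns 0 as well, reflecting the symmetry of the kernel stated in the corollary.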
3. Introduction to a Linear Operator and a Normal Orthogonal System in
In this section, we construct an orthogonal function system of . Also, representation of the solution of (1.1) and (1.2) is given in the reproducing kernel space .
To do this, we define a differential operator such that . After homogenization of the initial condition (1.2), IDEs (1.1) and (1.2) can be converted into the equivalent form as follows: such that , , and , where and for and , . It is easy to show that is a bounded linear operator from to .
Now, we construct an orthogonal function system of . Let and , where is dense on and is the adjoint operator of . From the properties of the reproducing kernel , we have for every . In terms of the properties of , one obtains , .
It is easy to see that . Thus, can be expressed in the form , where indicates that the operator applies to the function of .
Theorem 3.1. For (3.1), if is dense on , then is the complete function system of the space .
Proof. Clearly, . For each fixed , let , , which means that . Note that is dense on ; therefore, . It follows that from the existence of . So, the proof of the theorem is complete.
The orthonormal function system of the space can be derived from Gram-Schmidt orthogonalization process of as follows: where are orthogonalization coefficients given as , , and for in which , , and is the orthonormal system in the space .
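The orthogonalization step can be sketched discretely. The code below makes two simplifying assumptions: the functions ψᵢ are represented by their samples on a grid, and the orthonormalization is carried out in a trapezoidal L² inner product rather than the W-space norm used in the paper. The recursion nevertheless accumulates the lower-triangular coefficients βᵢₖ exactly as in the Gram-Schmidt process above.

```python
import numpy as np

# Discrete Gram-Schmidt sketch: returns the orthonormalized samples and the
# lower-triangular coefficients beta with  psibar_i = sum_k beta[i, k] psi_k.
# The inner product is a trapezoidal L^2 approximation (an assumption; the
# paper orthogonalizes in the W-space norm).

def gram_schmidt(psis, x):
    w = np.diff(x)

    def inner(f, g):
        h = f * g
        return float(np.sum((h[1:] + h[:-1]) / 2 * w))

    n = len(psis)
    beta = np.zeros((n, n))
    ortho = []
    for i in range(n):
        v = psis[i].astype(float).copy()
        coeffs = np.zeros(n)
        coeffs[i] = 1.0
        for j in range(i):               # subtract earlier projections
            c = inner(psis[i], ortho[j])
            v -= c * ortho[j]
            coeffs -= c * beta[j]
        nrm = np.sqrt(inner(v, v))       # normalize
        ortho.append(v / nrm)
        beta[i] = coeffs / nrm
    return ortho, beta

x = np.linspace(0.0, 1.0, 2001)
psis = [np.ones_like(x), x, x**2]        # monomials as a toy psi-family
ortho, beta = gram_schmidt(psis, x)
```

Orthonormalizing the monomials 1, x, x² this way reproduces the shifted Legendre polynomials up to normalization, and the rows of `beta` are the orthogonalization coefficients of the corresponding expansion.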
Theorem 3.2. For each , the series is convergent in the norm of . On the other hand, if is dense on and is the exact solution of (3.1), then
Proof. Since , is the Fourier series expansion about the orthonormal system , and is a Hilbert space, the series is convergent in the sense of . On the other hand, using (3.2), we have But since is the exact solution of (3.1), then and So, the proof of the theorem is complete.
Note that we denote the approximate solution of by
Theorem 3.3. If , then there exists such that , , where .
Proof. For any , we have , . By the expression of , it follows that , . Thus, , . Hence, , , where . The proof is complete.
Corollary 3.4. The approximate solution and its derivative are uniformly convergent.
Proof. By Theorems 3.2 and 3.3, for any , we get
On the other hand,
Hence, , where and are positive constants. Therefore, if as , the approximate solutions and converge uniformly to the exact solution and its derivative, respectively.
4. Iterative Method and Convergence Theorem
In this section, an iterative method of obtaining the solution of (3.1) is presented in the reproducing kernel space .
First of all, we note the following remark about solving (1.1) and (1.2) numerically: if (1.1) is linear, then the exact and approximate solutions can be obtained directly from (3.5) and (3.6), respectively; if (1.1) is nonlinear, then the exact and approximate solutions can be obtained using the following iterative method.
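For the linear case, the whole procedure can be condensed into a short sketch. The example below is illustrative and fixes several details that the text leaves open: it takes the toy problem Lu = u′ = f with u(0) = 0 on [0,1], works in the subspace of W²₂[0,1] with u(0) = 0 whose reproducing kernel is the standard piecewise cubic R(x, y) = xy + x²y/2 − x³/6 (x ≤ y), and exploits the identity ⟨ψᵢ, ψⱼ⟩ = (Lψⱼ)(xᵢ) so that the Gram matrix is available in closed form. Solving the Gram system is algebraically equivalent to the Gram-Schmidt form of (3.6).

```python
import numpy as np

# RKHS sketch for a linear toy problem (an illustrative stand-in for the
# general scheme):  u'(x) = f(x), u(0) = 0 on [0, 1], posed in the subspace
# {u in W^2_2 : u(0) = 0} with kernel
#     R(x, y) = x y + x^2 y / 2 - x^3 / 6   for x <= y  (symmetric).
# Then psi_i(x) = d/dy R(x, y) at y = x_i, and the Gram matrix satisfies
#     <psi_i, psi_j> = psi_j'(x_i) = 1 + min(x_i, x_j).

def psi(xs, xi):
    """psi_i evaluated at the points xs, for the dense node xi."""
    left = xs + xs**2 / 2                      # branch xs <= xi
    right = xs + xi * xs - xi**2 / 2           # branch xs >= xi
    return np.where(xs <= xi, left, right)

def rkhs_solve(f, nodes):
    G = 1.0 + np.minimum.outer(nodes, nodes)   # Gram matrix <psi_i, psi_j>
    c = np.linalg.solve(G, f(nodes))           # enforce <u_n, psi_i> = f(x_i)
    return lambda xs: sum(ci * psi(xs, xi) for ci, xi in zip(c, nodes))

nodes = np.linspace(0.02, 1.0, 50)             # dense points x_1, ..., x_n
u_n = rkhs_solve(np.cos, nodes)                # exact solution: sin(x)

xs = np.linspace(0.0, 1.0, 400)
print(np.max(np.abs(u_n(xs) - np.sin(xs))))    # small projection error
```

The computed uₙ is the orthogonal projection of the exact solution onto span{ψ₁, …, ψₙ}, so the error decreases as the nodes become dense, in line with Theorem 3.2.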
According to (3.5), the representation of the solution of (1.1) can be denoted by where . In fact, , , in (4.1) are unknown, and we will approximate using the known . For a numerical computation, we define an initial function and the -term approximation to by where the coefficients are given as
We mention here the following remark: in the iterative process of (4.2), the approximation is guaranteed to satisfy the initial condition (1.2).
Now, the approximate solution can be obtained by taking finitely many terms in the series representation of and
Next, we will prove that in the iterative formula (4.2) is convergent to the exact solution of (1.1).
Lemma 4.1. If , as and is continuous in with respect to , for and , then as , one has .
Proof. Firstly, we will prove that in the sense of . Since
by reproducing kernel property of , we have = and . Thus, = . From the symmetry of , it follows that as . Hence, as soon as .
On the other hand, by Corollary 3.4, for any , it holds that . Therefore, in the sense of as and .
Thus, by the continuity of , , and , it is obtained that , , and as . This shows that and as . Hence, the continuity of gives the result.
Lemma 4.2. in (4.2) is monotonically increasing in the sense of the norm of .
Proof. By Theorem 3.1, is the complete orthonormal system in the space . Hence, we have . Therefore, is monotonically increasing.
Lemma 4.3. One has .
Proof. The proof will be obtained by mathematical induction as follows: if , then = = = = . Thus,
Multiplying both sides of (4.6) by , summing for from to , and using the orthogonality of yield that
Now, if , then . On the other hand, if , then . Thus, . It is easy to see that by using mathematical induction.
Lemma 4.4. One has .
Proof. It is clear that, on taking limits in (4.2), . Therefore, , where is an orthogonal projector from onto Span. Thus, = = = = = = .
Theorem 4.5. Suppose that is bounded in (4.2). If is dense on , then the -term approximate solution in the iterative formula (4.2) converges to the exact solution of (3.1) in the space and , where is given by (4.3).
Proof. First of all, we will prove the convergence of . From (4.2), we infer that . The orthogonality of yields that
From Lemma 4.2, the sequence is monotonically increasing. Due to the condition that is bounded, is convergent as . Then, there exists a constant such that . This implies that , .
If , then using , one gets
Furthermore, . Consequently, as . Considering the completeness of the space , there exists a such that as in the sense of .
Secondly, we will prove that is the solution of (3.1). Since is dense on , for any , there exists subsequence such that as . From Lemmas 4.3 and 4.4, it is easy to see that . Hence, letting , by Lemma 4.1 and the continuity of , we have . That is, is the solution of (3.1).
Since , clearly, satisfies the initial condition (1.2). In other words, is the solution of (1.1) and (1.2), where and is given by (4.3). The proof is complete.
Theorem 4.6. Assume that is the solution of (3.1) and is the difference between the approximate solution and the exact solution . Then, is monotonically decreasing in the sense of the norm of .
Proof. It is obvious that and . Thus, ; consequently, the difference is monotonically decreasing in the sense of . So, the proof of the theorem is complete.
5. Numerical Examples
In this section, some numerical examples are studied to demonstrate the accuracy and applicability of the present method. The results obtained are compared with the exact solution of each example and are found to be in good agreement with it. In the process of computation, all the symbolic and numerical computations were performed using the Mathematica software package.
Example 5.1. Consider the nonlinear Fredholm-Volterra IDE: where . The exact solution is .
Using RKHS method, taking , , with the reproducing kernel function on , the approximate solution is calculated by (4.4). The numerical results at some selected grid points for and are given in Table 1.
As mentioned, we used the grid nodes stated earlier in order to obtain the approximate solutions. Moreover, it is possible to pick any point in , and the approximate solution and its derivative will still be applicable. Next, the numerical results for Example 5.1 at some selected grid nodes in of are given in Table 2.
Table 3 shows a comparison between the absolute errors of our method and those of the triangular functions method [6], the operational matrix with block-pulse functions method [7], and the hybrid Legendre polynomials and block-pulse functions method [8]. As is evident from the comparison results, our method is better than the mentioned methods in terms of accuracy and ease of use.
Example 5.2. Consider the nonlinear Fredholm-Volterra IDE: where . The exact solution is .
Using RKHS method, taking , , with the reproducing kernel function on , the approximate solution is calculated by (4.4). The numerical results at some selected grid points for and are given in Table 4.
A comparison among the RKHS solution, the triangular functions solution [6], the operational matrix with block-pulse functions solution [7], and the exact solution is shown in Table 5.
Example 5.3. Consider the nonlinear Fredholm-Volterra IDE: where . The exact solution is .
Using RKHS method, taking , , with the reproducing kernel function on , the approximate solution is calculated by (4.4). The numerical results at some selected grid points for and are given in Table 6.
Example 5.4. Consider the nonlinear Fredholm-Volterra IDE: where . The exact solution is .
Using RKHS method, taking , , with the reproducing kernel function on , the approximate solution is calculated by (4.4). The numerical results at some selected grid points for and are given in Table 7.
Example 5.5. Consider the nonlinear Fredholm-Volterra IDE:
where . The exact solution is .
Using RKHS method, taking , with the reproducing kernel function on , the approximate solution is calculated by (4.4). The numerical results at some selected grid points for and are given in Table 8.
6. Conclusion
In this paper, the RKHS method was employed to solve the nonlinear Fredholm-Volterra IDEs (1.1) and (1.2). The solution and the approximate solution are represented in the form of series in the space . Moreover, the approximate solution and its derivative converge uniformly to the exact solution and its derivative, respectively. Meanwhile, the error of the approximate solution is monotonically decreasing in the sense of the norm of .