In this paper, we consider a variant of the projected Tikhonov regularization method for solving Fredholm integral equations of the first kind. We give a theoretical analysis of this method in the Hilbert space \(L^{2}(a,b)\) setting and establish some convergence rates under certain regularity assumptions on the exact solution and the kernel \(k(\cdot,\cdot)\). Some numerical results are also presented.
1 Introduction
Let \(H=L^{2}((a,b);\mathbb{R})\) and consider the Fredholm integral equation of the first kind
where \(k(\cdot,\cdot)\) and g are known functions, and f is the unknown function to be determined. The equation can be written as an operator equation
Many inverse problems in applied science and engineering (see, e.g., [1‐3] and references therein) lead to the solution of Fredholm integral equations of the first kind (1).
Several numerical methods are available in the literature for solving linear integral equations of the first kind; we can cite, for example, multiscale methods [4-8], spectral-collocation methods [9, 10], reproducing kernel Hilbert space methods [11, 12], eigenvalue approximation methods [13-17], quadrature-based collocation methods [18, 19], projection methods [20-28], and other interesting methods presented in the books [13, 29-36].
Within regularization procedures, several authors have studied finite-dimensional approximations obtained by projecting regularized approximations onto finite-dimensional subspaces. Such methods may be called regularization projection methods.
The main idea of regularization by projection is to project the least squares minimization onto a finite-dimensional subspace to obtain a well-conditioned problem and, consequently, a stabilization of the generalized inverse of the approximate operator. We can distinguish two cases of regularization by projection: regularization in the preimage space and regularization in the image space; see, for example, [21, 22, 26, 27, 33, 37].
Following the ideas developed in [18, 19] and [26, 27], we analyze a variant of the projected Tikhonov regularization method applied to problem (2) in the Hilbert space \(L^{2}(a,b)\) setting. We develop the theoretical framework of this approximation method and establish convergence results under certain regularity conditions on the kernel \(k(\cdot,\cdot)\) and on the solution of the problem in question.
More precisely, we build a projection method by using very simple mathematical tools, which can be implemented numerically in a concrete way. Moreover, we give natural conditions on the kernel \(k(\cdot ,\cdot)\) of the operator K, which enable us to establish the convergence results of this approach. For the projection subspace, we use the Legendre polynomials, which are well studied in the literature compared to other classes of polynomials. This judicious choice also enables us to give a simple calculation and an explicit approximation formula for \(K^{*}K\) (see (25)). It is important to note that in [27] the author gives sufficient conditions on \(\Vert A-A_{n}\Vert \) within an abstract framework to establish the convergence of this approximation, which makes the approach rather limited in practice; moreover, it is not directly exploitable from the numerical point of view.
In this investigation, we assume that
(A1)
\(k(\cdot,\cdot)\) is nondegenerate.
(A2)
\(k(\cdot,\cdot)\in L^{2}((a,b)\times(a,b);\mathbb{R})\), that is, \(\kappa^{2}=\int_{a}^{b}\int_{a}^{b}\vert k(t,s)\vert ^{2}\,dt\,ds < +\infty\).
It is well known that under these conditions K is a compact (Hilbert-Schmidt) integral operator with infinite-dimensional range (\(\operatorname {dim}(\mathcal{R}(K))=+\infty\)). In this case, \(\mathcal{R}(K)\) is not closed, and problem (2) belongs to the class of ill-posed problems. The ill-posedness means that \(K^{\dagger}\) (the Moore-Penrose inverse) or \(K^{-1}\) (when K is injective) is an unbounded operator. Consequently, standard numerical procedures for solving such equations are unstable and pose very serious problems when the data are not exact; that is, small perturbations of the observation data may lead to large changes in the computed solution.
To overcome this difficulty and obtain stable approximate solutions of ill-posed problems, regularization procedures are employed, and Tikhonov regularization is one such procedure. This method consists in minimizing over H the so-called Tikhonov functional
where \(\alpha> 0\) is the regularization parameter. The regularized solution \(f_{\alpha}\) is the unique minimizer of the Tikhonov functional \(\Phi_{\alpha}(f)\). We write \(f_{\alpha}= \operatorname{\arg\min}_{f\in H}\Phi_{\alpha }(f)\); it is the unique solution of the normal equation
$$\bigl(\alpha I +K^{*}K \bigr)f=K^{*}g. $$
The linear operator \(R(\alpha)= (\alpha I +K^{*}K)^{-1}K^{*}\in\mathcal {L}(H)\) is called a regularizing operator, and we have
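As a concrete illustration, the following minimal MATLAB sketch applies Tikhonov regularization to a trapezoidal (Nyström-type) discretization of the integral equation; the kernel, the exact solution, and all numerical values below are illustrative placeholders of our own and are not taken from the examples of Section 5.

```matlab
% Minimal sketch: Tikhonov regularization on a trapezoidal discretization of
% (Kf)(t) = int_a^b k(t,s) f(s) ds, solving (alpha*I + K'*K) f = K'*g.
% The kernel k, the exact solution fex, and all numbers are illustrative only.
a = 0; b = 1; N = 100;
t = linspace(a, b, N+1)';                             % quadrature nodes
w = (b-a)/N * ones(N+1,1); w([1 end]) = (b-a)/(2*N);  % trapezoidal weights
k = @(t,s) exp(t.*s);                                 % hypothetical smooth kernel
[T, S] = ndgrid(t, t);                                % T(i,j) = t_i, S(i,j) = s_j
Kmat = k(T, S) * diag(w);                             % (Kf)(t_i) ~ sum_j w_j k(t_i,s_j) f(s_j)
fex  = sin(pi*t);                                     % hypothetical exact solution
g    = Kmat * fex;                                    % synthetic exact data
alpha = 1e-4;                                         % regularization parameter
falpha = (alpha*eye(N+1) + Kmat'*Kmat) \ (Kmat'*g);   % discrete normal equation
fprintf('relative L2 error: %.3e\n', norm(falpha - fex)/norm(fex));
```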
To establish the main results of our work, we introduce the following assumptions:
(H1)
The operator K is injective, that is, \(\mathcal{N}(K)=\{0\}\).
(H2)
\(g\in\mathcal{R}(K)\).
(H3)
The kernel \(k(\cdot,\cdot)\in\mathcal{C}^{r}([a,b]\times [a,b]; \mathbb{R})\), \(r\in\mathbb{N}\).
(H4)
The operator \(K^{*}\) is injective (\(\Longleftrightarrow \overline{\mathcal{R}(K)}=H\)).
2 Preliminaries and notation
In this section, we present the notation and functional setting and prepare some material that will be used in our analysis. For more details, we refer the reader to [32, 36, 38].
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces. We denote by \(\mathcal{L}(H_{1}, H_{2})\) the space of all bounded linear operators from \(H_{1}\) to \(H_{2}\) (and \(\mathcal{L}(H)\) if \(H_{1}=H_{2}=H\)) with the operator norm
The null-space of \(T\in\mathcal{L}(H_{1}, H_{2})\) is the set \(\mathcal {N}(T)=\{u\in H_{1}: Tu=0\}\), whereas the range of T is denoted by \(\mathcal{R}(T)=T(H_{1})=\{v= Tu, u\in H_{1}\}\).
Let \(T\in\mathcal{L}(H_{1}, H_{2})\). Recall that, for \(v\in H_{2}\), the linear operator equation
$$ Tu=v $$
(5)
has a solution if and only if \(v\in\mathcal{R}(T)\).
• If \(\mathcal{R}(T)\) is infinite-dimensional and T is injective, then \(T^{-1}:\mathcal{R}(T)\longrightarrow H_{1}\) is bounded if and only if \(\mathcal{R}(T)\) is closed.
• If \(v\notin \mathcal{R}(T)\), then we look for an element \(\hat{u}\in H_{1}\) such that Tû is “closest to” v in the sense that û minimizes the functional \(\Vert Tu-v\Vert _{H_{2}}\).
Definition 2.1
Let \(T\in\mathcal{L}(H_{1}, H_{2})\). We call \(\hat{u}\in H_{1}\) a least residual norm solution (LRN solution) of (5) if
Let \(v\in G=\mathcal{R}(T)+\mathcal{R}(T)^{\perp}\). Then \(u^{\dagger}\in S_{v}\) is called a best approximate solution (generalized solution) of (5) if \(\Vert u^{\dagger} \Vert _{H_{1}}=\inf_{\hat{u}\in S_{v}} \Vert \hat{u}\Vert _{H_{1}}\).
Theorem 2.1
Let \(v\in G=\mathcal{R}(T)+\mathcal{R}(T)^{\perp}\). Then there exists a unique \(u^{\dagger}\in S_{v}\) such that
where \(P: H_{1}\longrightarrow\mathcal{N}(T)^{\perp}\) is the orthogonal projection onto \(\mathcal{N}(T)^{\perp}\), and \(\hat{u}_{0}\) is any element of \(S_{v}\).
Definition 2.4
The Moore-Penrose (generalized) inverse \(T^{\dagger}: D(T^{\dagger})\longrightarrow H_{1}\) of T defined on the dense domain \(D(T^{\dagger})=\mathcal{R}(T)+\mathcal{R}(T)^{\perp} \) maps \(v\in D(T^{\dagger })\) to the best-approximate solution of (5), that is, \(T^{\dagger }v= u^{\dagger}\).
Remark 2.1
\(T^{\dagger}= T^{-1}\) if \(\mathcal{R}(T)^{\perp}=\{0\}\) and \(\mathcal {N}(T)=\{0\}\).
\(T^{\dagger}\) is continuous if and only if \(\mathcal{R}(T)\) is a closed subspace of \(H_{2}\).
Let E, F be two Banach spaces, and let \((T_{n})\subset\mathcal{L}(E, F)\). Then \(T_{n}\longrightarrow T\in\mathcal{L}(E, F)\) pointwise (i.e., \(T_{n} x\longrightarrow Tx\) for all \(x\in E\)) if and only if the sequence \((T_{n})\) is uniformly bounded and \(T_{n} x\longrightarrow Tx\) for all \(x\in\mathcal{D}\), where \(\mathcal{D}\subset E\) is a dense subspace of E.
We denote by \((\lambda_{i}, e_{i})_{i=0}^{\infty}\) the normalized eigensystem of the compact self-adjoint operator \(A=K^{*}K\). Then A can be diagonalized according to the following formula:
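For a compact self-adjoint operator with normalized eigensystem \((\lambda_{i}, e_{i})_{i=0}^{\infty}\), this is the standard spectral representation
$$ Ah=\sum_{i=0}^{\infty}\lambda_{i}\langle h, e_{i}\rangle e_{i},\quad h\in H. $$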
The classical Legendre polynomials \((L_{j} )_{j\in\mathbb{N}}\) are defined on the interval \([-1, 1]\) and can be determined with the aid of the following recurrence formulae:
In order to use these polynomials on the interval \([a, b]\), we define the normalized shifted Legendre polynomials of degree n as follows. Let \(x\in[a,b]\); then the change of variable \(y= \frac {2}{b-a}x-\frac{a+b}{b-a}\) maps the interval \([a,b]\) onto \([-1, 1]\), and the normalized shifted Legendre polynomials are given by
Let \(H_{n}= \operatorname {span}\{\hat{L}_{j}, j=0,1,\ldots,n\}\) be the subspace spanned by the shifted Legendre polynomials of degree ≤n, and let \(\Pi_{n}: H\longrightarrow H_{n}\) be the orthogonal projection defined as
$$ \Pi_{n}h=\sum_{j=0}^{n}c_{j}(h) \hat{L}_{j},\quad h\in H. $$
(11)
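As a computational illustration, the following MATLAB sketch evaluates the normalized shifted Legendre polynomials \(\hat{L}_{j}\) through the classical three-term recurrence \((j+1)L_{j+1}(y)=(2j+1)yL_{j}(y)-jL_{j-1}(y)\). Since the \(\hat{L}_{j}\) are orthonormal in \(L^{2}(a,b)\), the coefficients in (11) are \(c_{j}(h)=\langle h,\hat{L}_{j}\rangle\). The normalization factor \(\sqrt{(2j+1)/(b-a)}\), the helper name shiftedLegendre, and the test function h are our own illustrative choices, not taken from the paper.

```matlab
% shiftedLegendre.m -- sketch of the normalized shifted Legendre polynomials on [a,b]:
% V(:,j+1) = Lhat_j(x), built from the three-term recurrence and normalized so that
% ||Lhat_j||_{L^2(a,b)} = 1 (assumed normalization).
function V = shiftedLegendre(x, n, a, b)
    x = x(:);                                   % work with a column vector
    y = (2*x - a - b) / (b - a);                % map [a,b] onto [-1,1]
    P = zeros(numel(x), n+1);
    P(:,1) = 1;                                 % L_0(y) = 1
    if n >= 1, P(:,2) = y; end                  % L_1(y) = y
    for j = 1:n-1                               % (j+1)L_{j+1} = (2j+1) y L_j - j L_{j-1}
        P(:,j+2) = ((2*j+1)*y.*P(:,j+1) - j*P(:,j)) / (j+1);
    end
    V = P .* repmat(sqrt((2*(0:n)+1)/(b-a)), numel(x), 1);
end
```

With this helper, the projection (11) of a function h can be sketched as follows, using the built-in adaptive quadrature integral for the inner products.

```matlab
% Sketch: coefficients c_j(h) = <h, Lhat_j> and evaluation of Pi_n h in (11).
a = 0; b = 1; n = 5;
h = @(x) exp(x);                                   % hypothetical function to project
pick = @(M, col) M(:, col);                        % helper to index the result of a call
c = zeros(n+1, 1);
for j = 0:n
    Lj = @(x) reshape(pick(shiftedLegendre(x(:), n, a, b), j+1), size(x));
    c(j+1) = integral(@(x) h(x) .* Lj(x), a, b);   % c_j(h) = <h, Lhat_j>_{L^2(a,b)}
end
x = linspace(a, b, 201)';
Pnh = shiftedLegendre(x, n, a, b) * c;             % values of Pi_n h on a grid
```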
We quote some crucial properties of \(\Pi_{n}\) ([38], pp.283-287 and [39]).
Lemma 3.1
Let \(\Pi_{n}\) be the orthogonal projection defined in (11). Then we have
where c is a positive constant independent of n, and r is a positive integer.
Remark 3.1
Let \(\mathbb{N}=\{0,1,2,\ldots\}\) and \(r\in\mathbb{N}\).
1.
If \(k(\cdot,\cdot)\in\mathcal{C}^{r}([a,b]\times [a,b];\mathbb{R})\), then \(\mathcal{R}(K)\subset\mathcal {C}^{r}([a,b];\mathbb{R})\). Further, denoting \(D_{i,j}k(t,s)=\frac{\partial^{i+j}}{\partial t^{i}\,\partial s^{j}} k(t,s)\), we have
In practice, ill-posed problems such as integral equations of the first kind have to be approximated by a finite-dimensional problem whose solution can be easily calculated by using numerical computation software.
In this paper, we replace the original problem \(Kf=g\) by an algebraic system \(K_{n}f^{n}=g_{n}\) posed on \(\mathbb{R}^{n+1}\), where the Moore-Penrose generalized inverse \(K_{n}^{\dagger}\) is defined for every data vector \(g_{n}\in\mathbb{R}^{n+1}\).
We define the linear operator \(\mathbb{Q}_{n}: H=L^{2}(a,b)\longrightarrow\mathbb{R}^{n+1}\):
Let \(K_{n}: H=L^{2} ((a,b);\mathbb{R} )\longrightarrow\mathbb{R}^{n+1}\) be given by formula (20). Then \(K_{n}\) is a bounded operator, and the adjoint \(K_{n}^{*}: \mathbb{R}^{n+1} \longrightarrow H=L^{2} ((a,b);\mathbb{R} )\) of \(K_{n}\) is given by
which implies that \(K_{n} \in\mathcal{L}(H, \mathbb{R}^{n+1})\) and \(\Vert K_{n}\Vert \leq\kappa= (\int_{a}^{b}\int_{a}^{b}\vert k(t,s)\vert ^{2}\,dt\,ds )^{\frac{1}{2}}\).
(2) By definition of \(\mathbb{Q}_{n}: H=L^{2}(a,b)\longrightarrow\mathbb{R}^{n+1}\) it is easy to check that \(\mathbb{Q}_{n} \in\mathcal{L}(H, \mathbb{R}^{n+1})\). Thus, we can define its adjoint operator \(\mathbb{Q}_{n}^{*}: \mathbb{R}^{n+1} \longrightarrow H=L^{2}(a,b)\). Now, for
Now we are in a position to prove our main results. In the following theorem, we show the convergence of \(A_{n}\) to A together with other regularizing properties of \(A_{n}\).
Theorem 3.2
Let \(A= K^{*}K\) and \(A_{n}= K_{n}^{*}K_{n}\) be given by expression (25). Then, under the assumption
In view of Theorem 2.2 and Remark 3.3, to show the convergence result (33), it suffices to establish the result for \(h\in\mathcal{R}(K^{*}K)\). Before starting the proof, we introduce the following propositions.
If we choose the parameter α such that \(\alpha^{2}\frac {1}{\lambda_{N}^{2}}\Vert h\Vert _{H}^{2}\leq\frac{\varepsilon }{2}\), then we obtain the desired convergence. □
and from (30) and (34) we deduce that the right-hand side of this last inequality tends to 0 for all \(h\in\mathcal{R}(A)\). □
4 Convergence and error analysis
We denote by \(R(\alpha)= (\alpha I +K^{*}K)^{-1}K^{*}\in\mathcal {L}(H)\) (resp. \((\alpha I+ K_{n}^{*}K_{n})^{-1}K_{n}^{*}\)) the regularizing operator of K (resp. of \(K_{n}\)).
To establish the convergence results of this method, we point out the following results:
Because our original problem (\(Kf=g\)) is ill-posed, the problem of finding the generalized solution \(f^{\dagger,\delta, n}= K_{n}^{\dagger }g_{n}^{\delta}\in\mathcal{N}(K_{n})^{\perp}\) of problem (42) with inexact data \(g_{n}^{\delta}\) is unstable. Regularizing this equation by the Tikhonov regularization method, we obtain
Denote by \(f =K^{-1}g\) the exact solution of (2), by \(f_{\alpha}\) (resp. \(f_{\alpha}^{\delta}\)) the regularized solution of (44) (resp. of (45)), and by \(f_{\alpha}^{n}\) (resp. \(f_{\alpha }^{n,\delta}\)) the regularized solution of (46) (resp. of (47)).
Definition 4.1
We denote by \(f_{\alpha}\) (resp. \(f_{\alpha}^{\delta}\)) the regularized solution of problem (2) for the exact data g (resp. for the inexact data \(g_{\delta}\)):
$$\begin{aligned}& f_{\alpha}=R(\alpha)g= \bigl(\alpha I +K^{*}K \bigr)^{-1}K^{*}g, \end{aligned}$$
(48)
$$\begin{aligned}& f_{\alpha}^{\delta}=R(\alpha)g_{\delta}= \bigl( \alpha I +K^{*}K \bigr)^{-1}K^{*}g_{\delta}. \end{aligned}$$
(49)
Definition 4.2
For any \(\alpha>0\), the unique solution \(f_{\alpha}^{\delta,n}\) of (47) is considered as a regularized solution of \(f^{\dagger,n,\delta}\).
Remark 4.1
Without loss of generality, we can assume that \(\operatorname {dim}(\mathcal{R}( K_{n}^{*}) ) =n+1\). For example, under condition (H4), the vectors \(K^{*}\hat{L}_{j}\), \(j=0,1,\ldots,n\), are linearly independent, and consequently \(\operatorname {dim}(\mathcal{R}( K_{n}^{*}) )=n+1\).
Since \(f_{\alpha}^{\delta,n}\in\mathcal{R}( K_{n}^{*}K_{n})= \mathcal{R}( K_{n}^{*}) = \operatorname {span}\{K^{*}\hat{L}_{j}, j=0,1,\ldots,n\} \) (see (26)), \(f_{\alpha}^{\delta,n}\) can be expanded as
that is, \(S(\cdot,\cdot)\) is a positive symmetric bilinear form. Hence, \(\mathbf{B}=(b_{ij})_{0\leq i,j\leq n}=(S(\hat{L}_{i},\hat {L}_{j}))_{0\leq i,j\leq n}\) is a positive symmetric matrix, and for any \(\alpha> 0\), the matrix \(\mathbf{A}_{n}(\alpha)\) of system (56) is invertible. Therefore, this system is uniquely solvable. □
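To indicate how the finite system can be assembled in practice, the following MATLAB sketch computes the matrix \(\mathbf{B}\) on a quadrature grid under the assumption \(S(\hat{L}_{i},\hat{L}_{j})=\langle K^{*}\hat{L}_{i},K^{*}\hat{L}_{j}\rangle_{L^{2}(a,b)}\) with \((K^{*}h)(s)=\int_{a}^{b}k(t,s)h(t)\,dt\), and then solves a system of the assumed form \((\alpha I+\mathbf{B})\mathbf{x}=(\langle g^{\delta},\hat{L}_{j}\rangle)_{j}\), which is what one obtains by inserting the expansion \(f_{\alpha}^{\delta,n}=\sum_{j}x_{j}K^{*}\hat{L}_{j}\) into the regularized equation; system (56) itself may differ, so this is a hedged reconstruction and not the authors' formula. The kernel, the data, and the helper shiftedLegendre are the illustrative choices used in the earlier sketches.

```matlab
% Sketch: assembly of B = (<K* Lhat_i, K* Lhat_j>) and solution of an (n+1)x(n+1)
% system of the assumed form (alpha*I + B) x = (<g_delta, Lhat_j>)_j.
% All ingredients (kernel, data, grid) are illustrative; shiftedLegendre is the
% helper sketched earlier.
a = 0; b = 1; n = 3; N = 200; alpha = 1e-4;
t = linspace(a, b, N+1)';
w = (b-a)/N * ones(N+1,1); w([1 end]) = (b-a)/(2*N);   % trapezoidal weights
Wd = diag(w);                                          % diagonal weight matrix
[T, S] = ndgrid(t, t);
Kmat = exp(T.*S);                                      % Kmat(i,j) = k(t_i, s_j), hypothetical kernel
Lhat = shiftedLegendre(t, n, a, b);                    % columns Lhat_j on the grid
KsL  = Kmat' * Wd * Lhat;                              % (K* Lhat_j)(s_i) ~ sum_l w_l k(t_l,s_i) Lhat_j(t_l)
B    = KsL' * Wd * KsL;                                % b_ij ~ <K* Lhat_i, K* Lhat_j>
gdel = Kmat * Wd * sin(pi*t) + 1e-2*randn(N+1,1);      % hypothetical noisy data on the grid
x    = (alpha*eye(n+1) + B) \ (Lhat' * Wd * gdel);     % assumed concrete form of system (56)
falpha = KsL * x;                                      % f_alpha^{delta,n}(s_i) = sum_j x_j (K* Lhat_j)(s_i)
```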
The aim of this part is to derive the convergence and error bound for \(\Vert f-f_{\alpha}^{n,\delta} \Vert _{L^{2}(a,b)}\). To do this, we split the error into three parts:
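With the regularized solutions defined above, the triangle inequality gives one such splitting:
$$ \bigl\Vert f-f_{\alpha}^{n,\delta}\bigr\Vert _{L^{2}(a,b)}\leq \Vert f-f_{\alpha} \Vert _{L^{2}(a,b)}+\bigl\Vert f_{\alpha}-f_{\alpha}^{n}\bigr\Vert _{L^{2}(a,b)}+\bigl\Vert f_{\alpha}^{n}-f_{\alpha}^{n,\delta}\bigr\Vert _{L^{2}(a,b)}, $$
where the first term is the regularization error, the second the projection (discretization) error, and the third the error propagated from the noisy data.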
In this section, we consider the determination of \(\alpha(\delta)\) from the discrepancy principle of Morozov. The discrepancy principle (DP) suggests computing \(\alpha (\delta)>0\) such that
where \(\eta\in [ 1,\infty ] \). Obviously, the classical Morozov principle (63) is a particular case of the damped case with \(\eta=\infty\).
In [40, 41], the authors propose a cubically convergent algorithm for choosing a reasonable regularization parameter. This algorithm is summarized as follows.
Algorithm of the cubic Morozov discrepancy principle (CMDP)
Step 1.
Input \(\alpha_{0}>0\), \(\delta>0\), tolerance \(\epsilon >0\), and \(l_{\max}\); set \(l:=0\).
Step 2.
Compute \(f_{\alpha_{l}}^{n,\delta}\), \(\frac {d}{d\alpha}f_{\alpha_{l}}^{n,\delta}\), and \(\frac{d^{2}}{d\alpha ^{2}}f_{\alpha_{l}}^{n,\delta}\).
Step 3.
Compute \(\Phi(\alpha_{l})\), \(\Phi'(\alpha_{l})\), and \(\Phi''(\alpha_{l})\) from formula (65).
Step 4.
Solve for \(\alpha_{l+1}\) from iterative formulas (65), (70), and (71).
Step 5.
If \(\vert \alpha_{l+1}-\alpha_{l}\vert \leq \epsilon\) or \(l=l_{\max}\), STOP; otherwise, set \(l=l+1\), GOTO step 2.
where the matrix \(\mathbf{A}_{n}(\alpha)\) is given by (55).
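The cubic iteration above relies on formulas (65), (70), and (71). As a simpler alternative for experimentation, the following MATLAB sketch selects α by bisection on the classical discrepancy equation \(\Vert K f_{\alpha}^{\delta}-g^{\delta}\Vert =\eta\delta\), applied for simplicity to a quadrature discretization of K (matrices as in the earlier sketches) rather than to the projected system; it is not the CMDP algorithm of [40, 41].

```matlab
% Sketch: classical discrepancy principle ||K f_alpha - g_delta|| = eta*delta solved by
% bisection on the monotone residual; a simplified stand-in for the CMDP iteration,
% applied to a quadrature discretization Kmat of K and a noisy data vector gdel.
function alpha = discrepancyBisection(Kmat, gdel, delta, eta)
    m   = size(Kmat, 2);
    res = @(alpha) norm(Kmat * ((alpha*eye(m) + Kmat'*Kmat) \ (Kmat'*gdel)) - gdel);
    lo = 1e-14; hi = 1;
    while res(hi) < eta*delta && hi < 1e8     % enlarge the bracket if necessary
        hi = 10*hi;
    end
    for it = 1:60                             % bisect in log scale (alpha spans many decades)
        alpha = sqrt(lo*hi);
        if res(alpha) > eta*delta, hi = alpha; else, lo = alpha; end
    end
    alpha = sqrt(lo*hi);
end
```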
Remark 4.2
We note that formula (74) provides us with a practical method to calculate expression (72).
5 Numerical tests
The purpose of this final section is to illustrate the theoretical study with two numerical examples. The numerical experiments are carried out in MATLAB.
Let \(\{t_{i}=a+ \frac{(i-1)(b-a)}{N}, i=1,2,\ldots,N+1\}\subset[a,b]\) be the collocation points of the trapezoidal quadrature formula. The trapezoidal quadrature rule associated with these collocation points has the weights \(\omega_{1} = \omega_{N+1}=\frac{b-a}{2N}\), \(\omega _{i}=\frac{b-a}{N}\), \(i=2,3,\ldots,N\).
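In MATLAB, these collocation points and weights can be generated, for example, as follows (the values of a, b, and N are illustrative):

```matlab
% Collocation points t_i = a + (i-1)(b-a)/N and trapezoidal weights
a = 0; b = 1; N = 100;                         % illustrative values
i = (1:N+1)';
t = a + (i-1)*(b-a)/N;
w = (b-a)/N * ones(N+1,1);
w([1 N+1]) = (b-a)/(2*N);                      % omega_1 = omega_{N+1} = (b-a)/(2N)
% the rule approximates int_a^b h(t) dt by w' * h(t), e.g. w' * exp(t) ~ exp(b) - exp(a)
```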
the discrete data of g. Adding a normally distributed random perturbation (obtained by the MATLAB command randn) to each data value, we obtain the vector \(\mathbf{g}^{\delta}\):
where ε indicates the noise level of the measurement data, and the function "\(\operatorname {randn}(\cdot)\)" generates arrays of normally distributed random numbers with mean 0 and standard deviation \(\sigma= 1\); "\(\operatorname {randn}(\operatorname {size}(g))\)" returns an array of random entries of the same size as g. The bound δ on the measurement error can be measured in the sense of the root mean square error (RMSE) according to
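A standard choice, which we assume here for both the perturbation and the RMSE bound, is \(\delta= (\frac{1}{N+1}\sum_{i=1}^{N+1}(\mathbf{g}^{\delta}_{i}-\mathbf{g}_{i})^{2} )^{1/2}\); in MATLAB, the perturbed data and this bound can be produced as follows, with g denoting the discrete data vector:

```matlab
% Sketch: noisy data vector and RMSE noise bound (additive noise model assumed).
epsilon = 1/100;                               % noise level used in Section 5
gdel  = g + epsilon * randn(size(g));          % perturbed data vector g^delta
delta = sqrt(mean((gdel - g).^2));             % RMSE bound on the measurement error
```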
The authors thank the editor and the anonymous referees for their valuable comments and helpful suggestions that improved the quality of their article.
This work is supported by the MESRS of Algeria (CNEPRU Project B01120090003).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors read and approved the final manuscript.
The projected Tikhonov regularization method developed and used in this investigation to solve Fredholm integral equations of the first kind is very simple and effective, owing to the fact that the dimension of the projection subspace is very small (\(n=2,3\)); moreover, the regularized solution remains stable under strong noise (\(\varepsilon =1/100\)) for regular data.