1 Introduction
In 1994, Censor and Elfving [1] first introduced the split feasibility problem (SFP) in finite-dimensional Hilbert spaces for modeling inverse problems which arise in phase retrieval and in medical image reconstruction [2]. It was found that the SFP can also be used to model intensity-modulated radiation therapy (IMRT) (see [3-6]). Very recently, Xu [7] considered the SFP in the framework of infinite-dimensional Hilbert spaces. In this setting, the SFP is formulated as the problem of finding a point $x^*$
with the property
(1.1) $x^* \in C$ and $Ax^* \in Q$,
where C and Q are nonempty closed convex subsets of the infinite-dimensional real Hilbert spaces $H_1$ and $H_2$, respectively. Let $A \in B(H_1, H_2)$, where $B(H_1, H_2)$ denotes the family of all bounded linear operators from $H_1$ to $H_2$.
We use Γ to denote the solution set of the SFP, i.e., $\Gamma = \{x \in C : Ax \in Q\}$. Assume that the SFP is consistent (i.e., (1.1) has a solution), so that Γ is closed, convex and nonempty. A special case of the SFP is the following convex constrained linear inverse problem:
(1.2) find $x \in C$ such that $Ax = b$,
which has been extensively investigated by using the Landweber iterative method [8]:
$x_{n+1} = x_n + \gamma A^*(b - Ax_n)$.
Comparatively, the SFP has received much less attention so far, due to the complexity resulting from the set Q. Therefore, whether various versions of the projected Landweber iterative method [8] can be extended to solve the SFP remains an interesting open topic.
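As a concrete illustration, the projected Landweber iteration for (1.2) can be sketched in a few lines; the matrix, right-hand side, step size and the choice of C (the nonnegative orthant) are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Projected Landweber sketch for (1.2): find x in C with Ax = b,
# via x_{n+1} = P_C(x_n + gamma * A^T (b - A x_n)).
# A, b, C and gamma below are illustrative choices only.

def projected_landweber(A, b, project_C, gamma, n_iters=500):
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = project_C(x + gamma * A.T @ (b - A @ x))
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])                   # exact solution (1, 3) lies in C
gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # step size with 0 < gamma < 2/||A||^2
project_C = lambda v: np.maximum(v, 0.0)   # P_C for the nonnegative orthant
x = projected_landweber(A, b, project_C, gamma)
print(np.round(x, 4))                      # approaches [1, 3]
```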
The original algorithm given in [1] involves the computation of the inverse $A^{-1}$ (assuming the existence of the inverse of A):
$x_{n+1} = A^{-1}P_Q(P_{A(C)}(Ax_n))$,
where $C, Q \subseteq \mathbb{R}^n$ are closed convex sets, A is a full rank $n \times n$ matrix and $A(C) = \{y \in \mathbb{R}^n : y = Ax, x \in C\}$; the algorithm has thus not become popular.
A more popular algorithm that solves the SFP seems to be the CQ algorithm of Byrne [2, 9], which was found to be a gradient-projection method (GPM) in convex minimization. It is also a special case of the proximal forward-backward splitting method [10]. The CQ algorithm involves only the computation of the projections $P_C$ and $P_Q$ onto the sets C and Q, respectively, and is therefore implementable in the case where $P_C$ and $P_Q$ have closed-form expressions (for example, when C and Q are closed balls or half-spaces). It remains a challenge, however, to apply the CQ algorithm in the case where the projections $P_C$ and/or $P_Q$ fail to have closed-form expressions, even though the (weak) convergence of the algorithm can still be proved theoretically.
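For concreteness, Byrne's CQ iteration $x_{n+1} = P_C(x_n - \gamma A^*(I - P_Q)Ax_n)$, with $0 < \gamma < 2/\|A\|^2$, can be sketched when C is a box and Q a closed ball, two of the closed-form cases mentioned above; the specific sets, matrix and iteration count are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Sketch of the CQ algorithm for the SFP: find x in C with Ax in Q.
# C is a box and Q a closed ball, so both projections have closed forms.
# The sets, matrix and parameters are illustrative choices only.

def project_box(x, lo, hi):                   # P_C for C = [lo, hi]^n
    return np.clip(x, lo, hi)

def project_ball(y, center, radius):          # P_Q for a closed ball
    d = y - center
    norm = np.linalg.norm(d)
    return y if norm <= radius else center + radius * d / norm

def cq_algorithm(A, proj_C, proj_Q, x0, gamma, n_iters=5000):
    x = x0
    for _ in range(n_iters):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

A = np.array([[1.0, 2.0], [3.0, 4.0]])
gamma = 1.0 / np.linalg.norm(A, 2) ** 2       # 0 < gamma < 2/||A||^2
proj_C = lambda x: project_box(x, 0.0, 1.0)
proj_Q = lambda y: project_ball(y, np.array([1.0, 2.0]), 0.5)
x = cq_algorithm(A, proj_C, proj_Q, np.array([1.0, 1.0]), gamma)
residual = np.linalg.norm(A @ x - proj_Q(A @ x))
print(x, residual)                            # residual ~ 0: this SFP instance is consistent
```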
Recently, Xu [7] gave a continuation of the study on the CQ algorithm and its convergence. He applied Mann's algorithm to the SFP and proposed an averaged CQ algorithm, which was proved to converge weakly to a solution of the SFP. He derived a weak convergence result, which shows that for suitable choices of the iterative parameters (including the regularization), the sequence of iterates converges weakly to an exact solution of the SFP. He also established a strong convergence result, which shows that the minimum-norm solution can be obtained. Later, Deepho and Kumam [11] extended the results of Xu [7] by introducing and studying a modified Halpern iterative scheme for solving the split feasibility problem (SFP) in the setting of infinite-dimensional Hilbert spaces.
Throughout this paper, we always assume that the SFP is consistent, that is, the solution set Γ of the SFP is nonempty. Let $f : H_1 \to \mathbb{R}$ be the continuously differentiable function
$f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$.
The minimization problem
(1.3) $\min_{x \in C} f(x)$
is ill-posed. Therefore (see [7]), consider the following Tikhonov regularized problem:
(1.4) $\min_{x \in C} f_{\alpha}(x) := \frac{1}{2}\|(I - P_Q)Ax\|^2 + \frac{\alpha}{2}\|x\|^2$,
where $\alpha > 0$ is the regularization parameter.
We observe that the gradient
(1.5) $\nabla f_{\alpha}(x) = \nabla f(x) + \alpha x = A^*(I - P_Q)Ax + \alpha x$
is $(\|A\|^2 + \alpha)$-Lipschitz continuous and α-strongly monotone.
Define the Picard iterates
(1.6) $x_{n+1}^{\alpha} = P_C(I - \gamma\nabla f_{\alpha})x_n^{\alpha}$.
Xu [7] showed that if SFP (1.1) is consistent, then $x_n^{\alpha} \to x_{\alpha}$ as $n \to \infty$, and consequently the strong limit $\lim_{\alpha \to 0} x_{\alpha}$ exists and is the minimum-norm solution of the SFP. Note that (1.6) is a double-step iteration. Xu [7] further suggested the following single-step regularized method:
(1.7) $x_{n+1} = P_C\big(x_n - \gamma_n(A^*(I - P_Q)Ax_n + \alpha_n x_n)\big)$.
He proved that the sequence $\{x_n\}$ generated by (1.7) converges in norm to the minimum-norm solution of the SFP provided the parameters $\{\alpha_n\}$ and $\{\gamma_n\}$ satisfy the following conditions:
(i)
and ;
(iii)
.
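To make (1.7) concrete, the following sketch runs the single-step regularized iteration on a small consistent SFP whose minimum-norm solution is the origin. The sets and matrix, and the parameter choices $\alpha_n = n^{-1/2}$ and $\gamma_n = 1/(\|A\|^2 + \alpha_n)$, are illustrative assumptions, not the exact conditions of [7].

```python
import numpy as np

# Sketch of the single-step regularized method (1.7):
#   x_{n+1} = P_C(x_n - gamma_n * (A^T (Ax_n - P_Q(Ax_n)) + alpha_n * x_n)),
# with alpha_n -> 0, so the iterates drift toward the minimum-norm solution.
# C, Q, A and the parameter sequences are illustrative choices only.

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def project_ball(y, center, radius):
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= radius else center + radius * d / n

A = np.array([[1.0, 0.0], [0.0, 2.0]])
norm_A_sq = np.linalg.norm(A, 2) ** 2
proj_C = lambda x: project_box(x, -2.0, 2.0)
proj_Q = lambda y: project_ball(y, np.zeros(2), 1.0)

x = np.array([2.0, 2.0])
for n in range(1, 5001):
    alpha_n = 1.0 / np.sqrt(n)               # regularization alpha_n -> 0
    gamma_n = 1.0 / (norm_A_sq + alpha_n)    # step tied to the Lipschitz constant of grad f_alpha
    grad = A.T @ (A @ x - proj_Q(A @ x)) + alpha_n * x
    x = proj_C(x - gamma_n * grad)
print(np.round(x, 4))                        # near the minimum-norm solution (0, 0)
```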
Motivated by the idea of the relaxed extragradient method and Xu's regularization, Ceng, Ansari and Yao [12] presented the following relaxed extragradient method with regularization for finding a common element of the solution set of the split feasibility problem and the set of fixed points of a nonexpansive mapping S:
(1.8)
They obtained only the weak convergence of iterative algorithm (1.8).
The purpose of this paper is to study and analyze a relaxed extragradient method with regularization for finding a common element of the solution set Γ of the SFP and the set of fixed points of an asymptotically quasi-nonexpansive mapping and a Lipschitz continuous mapping in a real Hilbert space. We prove that the sequence generated by the proposed method converges weakly to an element of this common set.
2 Preliminaries
We first recall some definitions, notations, and results which will be needed in proving our main results. Let H be a real Hilbert space with the inner product $\langle\cdot,\cdot\rangle$ and the norm $\|\cdot\|$, and let C be a closed and convex subset of H. Let E be a Banach space. A mapping T is said to be demi-closed at origin if for any sequence $\{x_n\}$ with $x_n \rightharpoonup x$ and $Tx_n \to 0$, we have $Tx = 0$. A Banach space E is said to have the Opial property if for any sequence $\{x_n\}$ with $x_n \rightharpoonup x$,
$\liminf_{n\to\infty}\|x_n - x\| < \liminf_{n\to\infty}\|x_n - y\|$ for all $y \in E$ with $y \neq x$.
Remark 2.1 It is well known that each Hilbert space possesses the Opial property.
Definition 2.2 Let H be a real Hilbert space and let C be a nonempty closed convex subset of H. Let $T : C \to C$ be a mapping. We denote by $\mathrm{Fix}(T)$ the set of fixed points of T, that is, $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$. Then T is said to be
(i)
nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$;
(ii)
quasi-nonexpansive if $\|Tx - p\| \le \|x - p\|$ for all $x \in C$ and $p \in \mathrm{Fix}(T)$;
(iii)
asymptotically nonexpansive if there exists a sequence $\{k_n\} \subset [1, \infty)$ with $k_n \to 1$ such that $\|T^n x - T^n y\| \le k_n\|x - y\|$ for all $x, y \in C$ and $n \ge 1$;
(iv)
asymptotically quasi-nonexpansive if there exists a sequence $\{k_n\} \subset [1, \infty)$ with $k_n \to 1$ such that $\|T^n x - p\| \le k_n\|x - p\|$ for all $x \in C$, $p \in \mathrm{Fix}(T)$ and $n \ge 1$;
(v)
uniformly L-Lipschitzian if there exists a constant $L > 0$ such that $\|T^n x - T^n y\| \le L\|x - y\|$ for all $x, y \in C$ and $n \ge 1$.
Remark 2.3 By the above definitions, it is clear that:
(i)
a nonexpansive mapping is an asymptotically quasi-nonexpansive mapping;
(ii)
a quasi-nonexpansive mapping is an asymptotically quasi-nonexpansive mapping;
(iii)
an asymptotically nonexpansive mapping is an asymptotically quasi-nonexpansive mapping.
Proposition 2.4 (see [9])
We have the following assertions.
(i)
T is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-ism.
(ii)
If T is ν-ism and $\gamma > 0$, then γT is $\frac{\nu}{\gamma}$-ism.
(iii)
T is averaged if and only if the complement $I - T$ is ν-ism for some $\nu > \frac{1}{2}$.
Indeed, for $\alpha \in (0, 1)$, T is α-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.
Proposition 2.5 (see [9, 13])
We have the following assertions.
(i)
If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$, S is averaged and V is nonexpansive, then T is averaged.
(ii)
T is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.
(iii)
If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$, S is firmly nonexpansive and V is nonexpansive, then T is averaged.
(iv)
The composite of finitely many averaged mappings is averaged. That is, if each of the mappings $\{T_i\}_{i=1}^{N}$ is averaged, then so is the composite $T_1\cdots T_N$. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composite $T_1T_2$ is α-averaged, where $\alpha = \alpha_1 + \alpha_2 - \alpha_1\alpha_2$.
(v)
If the mappings $\{T_i\}_{i=1}^{N}$ are averaged and have a common fixed point, then
$\bigcap_{i=1}^{N}\mathrm{Fix}(T_i) = \mathrm{Fix}(T_1\cdots T_N)$.
Lemma 2.6 (see [14], demiclosedness principle)
Let C be a nonempty closed and convex subset of a real Hilbert space H and let $S : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(S) \neq \emptyset$. If the sequence $\{x_n\}$ converges weakly to x and the sequence $\{(I - S)x_n\}$ converges strongly to y, then $(I - S)x = y$; in particular, if $y = 0$, then $x \in \mathrm{Fix}(S)$.
Lemma 2.7 Let the sequences $\{a_n\}$ and $\{b_n\}$ of nonnegative real numbers satisfy
$a_{n+1} \le (1 + b_n)a_n$ for all $n \ge 1$,
where $\sum_{n=1}^{\infty} b_n < \infty$.
Then (1) $\lim_{n\to\infty} a_n$ exists; (2) if $\liminf_{n\to\infty} a_n = 0$, then $\lim_{n\to\infty} a_n = 0$.
The following proposition gives some characterizations and useful properties of the metric projection in a Hilbert space.
For every point $x \in H$, there exists a unique nearest point in C, denoted by $P_C x$, such that
(2.1) $\|x - P_C x\| \le \|x - y\|$ for all $y \in C$,
where $P_C$ is called the metric projection of H onto C. We know that $P_C$ is a nonexpansive mapping of H onto C.
Proposition 2.8 For given $x \in H$ and $z \in C$:
(i)
$z = P_C x$ if and only if $\langle x - z, y - z\rangle \le 0$ for all $y \in C$.
(ii)
$z = P_C x$ if and only if $\|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2$ for all $y \in C$.
(iii)
For all $x, y \in H$, $\langle P_C x - P_C y, x - y\rangle \ge \|P_C x - P_C y\|^2$.
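The characterization in Proposition 2.8(i) lends itself to a quick numerical sanity check when C is the closed unit ball, whose projection has the closed form $P_C x = x/\max(1, \|x\|)$; the sampling below is purely illustrative.

```python
import numpy as np

# Numerical check of Proposition 2.8(i): z = P_C(x) if and only if
# <x - z, y - z> <= 0 for all y in C.  Here C is the closed unit ball,
# whose metric projection is x / max(1, ||x||).

def project_unit_ball(x):
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=3)      # a sample point (may lie inside or outside C)
z = project_unit_ball(x)

# Sample many points y in C and verify <x - z, y - z> <= 0 for each.
ys = rng.normal(size=(1000, 3))
ys /= np.maximum(1.0, np.linalg.norm(ys, axis=1, keepdims=True))
vals = (ys - z) @ (x - z)
print(float(vals.max()) <= 1e-9)  # prints True
```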
Lemma 2.9 Let H be a real Hilbert space. Then the following equations hold:
(i)
$\|x - y\|^2 = \|x\|^2 - \|y\|^2 - 2\langle x - y, y\rangle$ for all $x, y \in H$;
(ii)
$\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2$ for all $x, y \in H$ and $\lambda \in [0, 1]$.
Let K be a nonempty closed convex subset of a real Hilbert space H and let $F : K \to H$ be a monotone mapping. The variational inequality problem (VIP) is to find $x \in K$ such that
$\langle Fx, y - x\rangle \ge 0$ for all $y \in K$.
The solution set of the VIP is denoted by $\mathrm{VI}(K, F)$. It is well known that
$x \in \mathrm{VI}(K, F)$ if and only if $x = P_K(x - \lambda Fx)$ for any $\lambda > 0$.
A set-valued mapping $T : H \to 2^H$ is called monotone if for all $x, y \in H$, $f \in Tx$ and $g \in Ty$ imply
$\langle x - y, f - g\rangle \ge 0$.
A monotone mapping $T : H \to 2^H$ is called maximal if its graph $G(T)$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping T is maximal if and only if, for $(x, f) \in H \times H$, $\langle x - y, f - g\rangle \ge 0$ for every $(y, g) \in G(T)$ implies $f \in Tx$. Let $F : K \to H$ be a monotone and k-Lipschitz continuous mapping and let $N_K v$ be the normal cone to K at $v \in K$, that is,
$N_K v = \{w \in H : \langle v - u, w\rangle \ge 0 \text{ for all } u \in K\}$.
Define
$Tv = \begin{cases} Fv + N_K v, & v \in K,\\ \emptyset, & v \notin K.\end{cases}$
Then T is maximal monotone and $0 \in Tv$ if and only if $v \in \mathrm{VI}(K, F)$; see [15] for more details.
We can use fixed point algorithms to solve the SFP on the basis of the following observation.
Let $\lambda > 0$ and assume that $x^* \in \Gamma$. Then $Ax^* \in Q$, which implies that $(I - P_Q)Ax^* = 0$, and thus $\lambda A^*(I - P_Q)Ax^* = 0$. Hence, we have the fixed point equation $x^* = x^* - \lambda A^*(I - P_Q)Ax^*$. Requiring that $x^* \in C$, we consider the fixed point equation
(2.2) $x^* = P_C(x^* - \lambda A^*(I - P_Q)Ax^*) = P_C(I - \lambda\nabla f)x^*$.
It is proved in [[7], Proposition 3.2] that the solutions of fixed point equation (2.2) are exactly the solutions of the SFP; namely, for given $x^* \in H_1$, $x^*$ solves the SFP if and only if $x^*$ solves fixed point equation (2.2).
Proposition 2.10 (see [12])
Given $x^* \in H_1$, the following statements are equivalent.
(i)
$x^*$ solves the SFP;
(ii)
$x^*$ solves fixed point equation (2.2);
(iii)
$x^*$ solves the variational inequality problem (VIP) of finding $x^* \in C$ such that
(2.3) $\langle \nabla f(x^*), x - x^*\rangle \ge 0$ for all $x \in C$,
where $\nabla f = A^*(I - P_Q)A$ and $A^*$ is the adjoint of A.
Proof (i) ⇔ (ii). See the proof in [[7], Proposition 3.2].
(ii) ⇔ (iii). Observe that
where $\lambda > 0$. □
Remark 2.11 It is clear from Proposition 2.10 that
$\Gamma = \mathrm{Fix}(P_C(I - \lambda\nabla f)) = \mathrm{VI}(C, \nabla f)$ for any $\lambda > 0$,
where $\mathrm{Fix}(P_C(I - \lambda\nabla f))$ and $\mathrm{VI}(C, \nabla f)$ denote the set of fixed points of $P_C(I - \lambda\nabla f)$ and the solution set of the VIP, respectively.
3 Main result
Theorem 3.1 Let C be a nonempty, closed, and convex subset of a real Hilbert space H and let T be a uniformly L-Lipschitzian and asymptotically quasi-nonexpansive mapping with and for all such that .
Let and be the sequences in C generated by the following algorithm:
(3.1)
where , and the three sequences , , and satisfy the conditions:
(ii)
for some and ,
(iii)
for some .
Then the sequences and converge weakly to an element .
Proof We first show that is ζ-averaged for each , where
Indeed, it is easy to see that is -ism, that is,
Observe that
Hence, it follows that is -ism. Thus, is -ism. By Proposition 2.4(iii), the composite is -averaged. Therefore, noting that is -averaged and utilizing Proposition 2.5(iv), we know that for each , is ζ-averaged with
This shows that is nonexpansive. Furthermore, for with , utilizing the fact that , we may assume that
Consequently, it follows that for each integer , is -averaged with
This immediately implies that is nonexpansive for all .
We divide the remainder of the proof into several steps.
Step 1. We prove that is bounded. Indeed, take a fixed arbitrarily. Then we get for . Since and are nonexpansive mappings, we have
(3.2)
Observe that
(3.3)
Since
, according to Lemma 2.7 and (i), (ii) and (3.3), we obtain that
(3.4)
This implies that is bounded and is also bounded.
It follows that
Hence is bounded.
Step 2. We prove that
In fact, it follows from (3.2) that
where .
It follows that
Also, observe that
Hence, we have
(3.5)
By the conditions (i), (iii) and
, we can conclude that
(3.6)
Since and by Proposition 2.8(ii), we have
(3.7)
Consequently, utilizing Lemma 2.9(ii) and (3.7), we conclude that
It follows that we get
(3.8)
So, taking
, since
, (i)-(iii), (3.6) and (3.8), we can conclude that
(3.9)
Consider
(3.10)
From (3.6) we obtain
(3.11)
Observe that
So, from (3.6) and (3.9), we get
(3.12)
We compute that
From the conditions (i), (ii) and (3.11), we obtain that
(3.13)
Since T is uniformly L-Lipschitzian, we have
Since
and
, it follows that
(3.14)
Step 3. We show that .
Since
is Lipschitz continuous, from (3.9), we have
Since is bounded, there is a subsequence of that converges weakly to some .
First, we show that . Since , it is known that .
Put
where . Then S is maximal monotone and if and only if ; see [17] for more details. Letting , we have
So, we have
On the other hand, from
we have
and
Therefore, from
and
, it follows that
Hence, we obtain
Since S is maximal monotone, we have , and hence . Thus, it is clear that .
Next, we show that . Indeed, since and by (3.14) and Lemma 2.6, we get . Therefore, we have .
Let
be another subsequence of
such that
. Then
. Let us show that
. Assume that
. From the Opial condition [
18], we have
This is a contradiction. Thus, we have
. This implies
Further, from , it follows that . This shows that both sequences and converge weakly to . This completes the proof. □
Utilizing Theorem 3.1, we have the following new results in the setting of real Hilbert spaces.
Take in Theorem 3.1. Therefore the conclusion follows.
Corollary 3.2 Let C be a nonempty, closed, and convex subset of a real Hilbert space H.
Suppose that .
Let be a sequence in C generated by the following algorithm:
(3.15)
where , and the sequences , , and satisfy the conditions:
(ii)
for some and ,
(iii)
for some .
Then the sequence converges weakly to an element .
Take in Theorem 3.1. Therefore the conclusion follows.
Corollary 3.3 Let C be a nonempty, closed, and convex subset of a real Hilbert space H and let T be a uniformly L-Lipschitzian and quasi-nonexpansive mapping with and for all such that .
Let be the sequence in C generated by the following algorithm:
(3.16)
and let the sequence satisfy the condition for some . Then the sequence converges weakly to an element .
Remark 3.4 Theorem 3.1 improves and extends [[7], Theorem 5.7] in the following aspects:
(a)
The iterative algorithm of [[7], Theorem 5.7] is extended to develop our relaxed extragradient algorithm with regularization in Theorem 3.1.
(b)
The technique of proving weak convergence in Theorem 3.1 is different from that in [[7], Theorem 5.7], because our argument uses asymptotically quasi-nonexpansive mappings and the properties of maximal monotone mappings.
(c)
Theorem 3.1 treats the problem of finding a common element of for asymptotically quasi-nonexpansive mappings, which is more general than both the fixed point problem for nonexpansive mappings and the problem of finding a solution of the SFP considered in [[7], Theorem 5.7].
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.