1 Introduction
The variational inequality problem (VIP), first introduced by Hartman and Stampacchia [1] in 1966, is an important tool for studying engineering mechanics, physics, economics, optimization theory and the applied sciences in a unified and general framework (see [2, 3]). Under appropriate conditions, there are two general approaches to solving the variational inequality problem: the regularization method and the projection method. Many projection-type algorithms for solving variational inequalities have been proposed and analyzed by many authors [4–22]. The gradient method is the simplest of these; it performs only one projection onto the feasible set per iteration, but its convergence requires strong monotonicity. To avoid the strong monotonicity hypothesis, Korpelevich [4] proposed an algorithm, called the extragradient method, for solving variational inequalities in Euclidean space. The subgradient extragradient algorithm was introduced by Censor et al. [5] for solving variational inequalities in real Hilbert spaces. Yao et al. [6] proposed an iterative algorithm for finding a common solution of pseudomonotone variational inequalities and a fixed point of pseudocontractive operators in Hilbert spaces.
In the past, most variational inequalities were studied in Euclidean or Hilbert spaces; recently, extragradient-type methods have been extended from Hilbert spaces to Banach spaces (see [23–27]). In [23] the authors combined the subgradient extragradient method with the Halpern method to propose an algorithm for solving variational inequalities in Banach spaces. In [24], a splitting algorithm was proposed for finding a common zero of a finite family of inclusion problems of accretive operators in Banach spaces. Inspired by the work mentioned above, in this paper we extend the subgradient extragradient algorithm proposed in [8] for solving variational inequalities from Hilbert spaces to Banach spaces. It is worth stressing that our algorithm has a simple structure and that its convergence does not require knowledge of the Lipschitz constant of the mapping. The paper is organized as follows. In Sect. 2, we present some preliminaries that will be needed in the sequel. In Sect. 3, we propose an algorithm and analyze its convergence. Finally, in Sect. 4 we present a numerical example and comparison.
2 Mathematical preliminaries
In this section we introduce some definitions and basic results that will be used in this paper. Assume that X is a real Banach space with dual \(X^{\ast }\); \(\| \cdot \| \) and \(\| \cdot \| _{\ast }\) denote the norms of X and \(X^{\ast }\), respectively; \(\langle x, x^{\ast } \rangle \) denotes the duality pairing on \(X\times X^{\ast }\) for all \(x^{\ast } \in X^{\ast }\) and \(x\in X\); \(x_{n}\longrightarrow x\) denotes strong convergence of a sequence \(\{ x_{n}\}\) of X to \(x\in X\); and \(x_{n}\rightharpoonup x \) denotes weak convergence of a sequence \(\{ x_{n}\}\) of X to \(x\in X\). Let \(S_{X}\) denote the unit sphere of X and \(B_{X}\) the closed unit ball of X. Let C be a nonempty closed convex subset of X, with closure denoted by C̄, and let \(F: C\longrightarrow X^{\ast }\) be a continuous mapping. Consider the variational inequality (for short, \(\operatorname{VI}(F, C)\)), which consists in finding a point \(x \in C\) such that
$$ \bigl\langle F(x),y-x\bigr\rangle \geq 0, \quad \forall y\in C. $$
(1)
Let S be the solution set of (1). Finding a point of S is a fundamental problem in optimization theory. It is well known that x is a solution of \(\operatorname{VI}(F,C)\) if and only if x is a solution of the fixed-point equation \(x=P_{C}(x-\lambda F(x))\), where λ is an arbitrary positive constant. Therefore, fixed-point algorithms can be used to solve \(\operatorname{VI}(F, C)\).
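For intuition, the following minimal Python sketch uses this fixed-point characterization as a residual test in \(R^{n}\); the operator F, the feasible set (a box, projected with np.clip) and the step size λ are illustrative assumptions, not objects from this paper.

```python
import numpy as np

def natural_residual(F, project_C, x, lam=1.0):
    """||x - P_C(x - lam * F(x))||: this quantity is zero exactly when
    x is a fixed point of x |-> P_C(x - lam * F(x)), i.e. a solution of VI(F, C)."""
    x = np.asarray(x, dtype=float)
    return np.linalg.norm(x - project_C(x - lam * F(x)))

# Example with F(x) = A x + b (monotone affine) and C = [-1, 1]^2.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b
project_C = lambda z: np.clip(z, -1.0, 1.0)
print(natural_residual(F, project_C, np.zeros(2)))   # nonzero: 0 is not a solution
```

A residual of this kind is commonly used as a stopping criterion in the projection methods recalled below.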
We next recall some properties of Banach spaces. Let X be a real Banach space and \(X^{*}\) the corresponding dual space. The normalized duality mapping \(J_{X}\) (usually written J) of X into \(X^{*}\) is defined by
$$ J(x)=\bigl\{ x^{*}\in X^{*} | \bigl\langle x,x^{*} \bigr\rangle = \bigl\Vert x^{*} \bigr\Vert ^{2}= \Vert x \Vert ^{2} \bigr\} $$
for all \(x\in X\). Let \(q \in (0,2]\). The generalized duality mapping \(J_{q}:X \rightarrow 2^{X^{*}}\) is defined (for the definitions and properties, see [24]) by
$$ J_{q}(x)=\bigl\{ j_{q}(x)\in X^{*} | \bigl\langle j_{q}(x),x \bigr\rangle = \Vert x \Vert \bigl\Vert j_{q}(x) \bigr\Vert , \bigl\Vert j_{q}(x) \bigr\Vert = \Vert x \Vert ^{q-1}\bigr\} $$
for all \(x\in X\).
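For concreteness (a standard example, recalled here only for illustration and not used in the sequel): in a real Hilbert space the normalized duality mapping reduces to the identity after the usual identification of the space with its dual, while in the sequence space \(\ell ^{p}\) with \(1< p<\infty \) it is single valued and given explicitly by

$$ J(x)= \Vert x \Vert _{p}^{2-p}\bigl( \vert x_{1} \vert ^{p-1} \operatorname{sgn}(x_{1}), \vert x_{2} \vert ^{p-1} \operatorname{sgn}(x_{2}),\ldots \bigr) \quad \text{for } x\neq 0, \qquad J(0)=0. $$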
Let \(U=\{x\in X: \|x\|=1 \}\). The norm of X is said to be Gâteaux differentiable if, for each \(x, y\in U\), the limit
$$ \lim_{t\rightarrow 0}\frac{ \Vert x+ty \Vert - \Vert x \Vert }{t} $$
(4)
exists. In this case, the space X is also called smooth. It is known that X is smooth iff J is a single-valued mapping of X into \(X^{*}\), X is reflexive iff J is surjective, and X is strictly convex iff J is one-to-one. Therefore, if X is a smooth, strictly convex and reflexive Banach space, then J is a single-valued bijection; its inverse \(J^{-1}\) exists and coincides with the duality mapping \(J^{*}\) on \(X^{*}\). More details can be found in [29–31]. If the limit (4) converges uniformly in \(x, y\in S_{X}\), then X is said to be uniformly smooth. X is said to be strictly convex if \(\| \frac{x+ y}{2}\| <1\) whenever \(x, y\in S_{X}\) and \(x\neq y\). The modulus of convexity \(\delta _{X}\) is defined by
$$ \delta _{X}(\varepsilon )=\inf \biggl\{ 1- \biggl\Vert \frac{x+ y}{2} \biggr\Vert \Big| x,y \in B_{X}, \Vert x-y \Vert \geq \varepsilon \biggr\} , $$
(5)
for all \(\varepsilon \in [0,2]\). A Banach space X is said to be uniformly convex if \(\delta _{X}(\varepsilon )> 0\) for every \(\varepsilon \in (0,2]\). It is well known that a Banach space X is uniformly convex if and only if, for any two sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) in X such that

$$ \lim_{n\rightarrow \infty } \Vert x_{n} \Vert =\lim_{n\rightarrow \infty } \Vert y_{n} \Vert =1 \quad \text{and} \quad \lim_{n\rightarrow \infty } \Vert x_{n}+y_{n} \Vert =2, $$

one has \(\lim_{n\rightarrow \infty } \Vert x_{n}-y_{n} \Vert =0\). A uniformly convex Banach space is strictly convex and reflexive. By [24] we know that a Banach space X is smooth if and only if the duality mapping \(J_{q}\) is single valued, and is uniformly smooth if and only if the duality mapping \(J_{q}\) is single valued and norm-to-norm uniformly continuous on bounded subsets of X. Moreover, if there exists \(c>0\) such that \(\delta _{X}(\varepsilon )\geq c\varepsilon ^{2}\) for all \(\varepsilon \in [0,2]\), then X is said to be 2-uniformly convex. It is obvious that every 2-uniformly convex Banach space is uniformly convex, and all Hilbert spaces are uniformly smooth and 2-uniformly convex, and therefore reflexive.
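As a concrete illustration (a standard computation from the parallelogram law, recorded here only as an example), in a real Hilbert space H the modulus of convexity is

$$ \delta _{H}(\varepsilon )=1-\sqrt{1-\frac{\varepsilon ^{2}}{4}} \geq \frac{\varepsilon ^{2}}{8}, \quad \varepsilon \in [0,2], $$

so every Hilbert space is 2-uniformly convex with \(c=\frac{1}{8}\).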
Now, we recall some useful definitions and results. Firstly, let us introduce the metric projection operator of X. Let \(C\subseteq X\) be a nonempty closed convex subset of a real uniformly convex Banach space X. Then, for any \(z\in X\), there exists a unique element \(\tilde{z} \in C\) such that \(\|z-\tilde{z}\|\leq \|z-y\|\) for all \(y\in C\). Putting \(\tilde{z}=P_{C}z\), the operator \(P_{C}: X \longrightarrow C \subseteq X\) is called the metric projection operator of X onto C.
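In a Euclidean space the metric projection onto simple sets has a closed form. The following Python sketch (illustrative only; the vectors used are arbitrary test data) projects onto a half-space of the type \(\{x \mid \langle a, x-y \rangle \leq 0\}\), which is exactly the kind of set that appears as \(T_{n}\) in the subgradient extragradient methods recalled below.

```python
import numpy as np

def project_halfspace(z, a, y):
    """Euclidean projection of z onto the half-space {x : <a, x - y> <= 0}."""
    z = np.asarray(z, dtype=float)
    a = np.asarray(a, dtype=float)
    violation = np.dot(a, z - y)
    if violation <= 0:            # z already lies in the half-space
        return z
    return z - (violation / np.dot(a, a)) * a

# Example: project (2, 2) onto {x : <(1, 0), x - 0> <= 0}; the result is (0, 2).
print(project_halfspace([2.0, 2.0], [1.0, 0.0], np.zeros(2)))
```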
To avoid the strong monotonicity hypothesis, Korpelevich [4] gave the extragradient method:
$$ y_{n}=P_{C}\bigl(x_{n}-\lambda F(x_{n})\bigr), \qquad x_{n+1}=P_{C} \bigl(x_{n}-\lambda F(y_{n})\bigr), $$
(6)
where \(\lambda \in (0,\frac{1}{L})\) and L is the Lipschitz constant of F.
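As an illustration, here is a minimal Python sketch of iteration (6) in \(R^{n}\) with the Euclidean projection; the affine monotone operator, the box constraint and the step size are assumptions made only for this example.

```python
import numpy as np

def extragradient(F, project_C, x0, lam, tol=1e-8, max_iter=5000):
    """Korpelevich extragradient iteration (6):
    y_n = P_C(x_n - lam F(x_n)),  x_{n+1} = P_C(x_n - lam F(y_n))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = project_C(x - lam * F(x))
        x_next = project_C(x - lam * F(y))
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: F(x) = A x + b with A skew-symmetric (hence monotone), C = [-1, 1]^2.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([0.5, -0.5])
F = lambda x: A @ x + b
project_C = lambda z: np.clip(z, -1.0, 1.0)
L = np.linalg.norm(A, 2)                 # Lipschitz constant of F
x_sol = extragradient(F, project_C, x0=np.zeros(2), lam=0.9 / L)
```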
The subgradient extragradient algorithm of Censor et al. [5] extends (6) by replacing the second orthogonal projection onto C with a projection onto an explicitly constructible half-space, for solving \(\operatorname{VI}(F, C)\) in a real Hilbert space. Their method is of the following form:
$$ y_{n}=P_{C}\bigl(x_{n}-\lambda F(x_{n})\bigr), \qquad x_{n+1}=P_{T_{n}} \bigl(x_{n}-\lambda F(y_{n})\bigr), $$
(7)
where \(T_{n}=\{x\in H|\langle x_{n}-\lambda F(x_{n})-y_{n},x-y_{n} \rangle \leq 0\}\) and \(\lambda \in (0,\frac{1}{L})\). Cai et al. [23] suggested the following method:
$$ \textstyle\begin{cases} x_{0}\in X,\\ y_{n}=P_{C}(Jx_{n}-\lambda _{n} F(x_{n})), \\ T_{n}=\{x \in X|\langle Jx_{n}-\lambda _{n} F(x_{n})-Jy_{n},x-y_{n} \rangle \leq 0\},\\ z_{n}=P_{T_{n}}(Jx_{n}-\lambda _{n} F(y_{n})),\\ x_{n+1}=J ^{-1}(\alpha _{n}Jx_{0}+(1-\alpha _{n})Jz_{n}), \end{cases} $$
(8)
where J is the normalized duality mapping of X into \(X^{*}\), \(\lambda _{n} \in (0,\frac{1}{L})\), \(\{\alpha _{n}\}\subset (0,1)\), \(\alpha _{n}\rightarrow 0\) and \(\sum_{n=1}^{\infty }\alpha _{n}=+\infty \). They proved that the sequence \(\{x_{n}\}\) generated by (8) converges strongly to \(P_{S}Jx_{0}\).
The main drawback of algorithms (7) and (8) is the requirement to know the Lipschitz constant, or at least some estimate of it. Yekini and Olaniyi [7] proposed the following subgradient extragradient method:
$$ \textstyle\begin{cases} \text{Given } \rho \in (0, 1), \mu \in (0, 1), \\ y_{n}=P_{C}(x_{n}-\lambda _{n} F(x_{n})), \text{ where } \lambda _{n}= \rho ^{l_{n}} \text{ and } l_{n} \text{ is the smallest nonnegative integer } l \\ \text{such that } \lambda _{n} \Vert F(x_{n})-F(y_{n}) \Vert \leq \mu \Vert x_{n}-y_{n} \Vert , \\ z_{n}=P_{T_{n}}(x_{n}-\lambda _{n} F(y_{n})), \text{ where } T_{n}=\{x\in H|\langle x_{n}-\lambda _{n} F(x_{n})-y_{n},x-y_{n} \rangle \leq 0\}, \\ x_{n+1}=\alpha _{n}f(x_{n})+(1-\alpha _{n})z_{n}, \text{ where } f:H \rightarrow H \text{ is a contraction mapping}. \end{cases} $$
(9)
Algorithm (9) does not require one to know the Lipschitz constant, but the line search may involve the computation of additional projections.
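The step-size rule in (9) is an Armijo-type backtracking search. The following Python sketch (an illustration only; ρ, μ and the starting trial value are arbitrary choices, not prescribed by the paper) shows how \(\lambda _{n}\) and \(y_{n}\) can be computed at each iteration.

```python
import numpy as np

def backtracking_step(F, project_C, x, rho=0.5, mu=0.9, max_backtracks=50):
    """Armijo-type rule of (9): find lam = rho**l with l the smallest
    nonnegative integer such that
        lam * ||F(x) - F(y)|| <= mu * ||x - y||,
    where y = P_C(x - lam * F(x)).  Returns (lam, y)."""
    x = np.asarray(x, dtype=float)
    lam = 1.0                                  # trial value rho**0
    for _ in range(max_backtracks):
        y = project_C(x - lam * F(x))
        if lam * np.linalg.norm(F(x) - F(y)) <= mu * np.linalg.norm(x - y):
            return lam, y
        lam *= rho
    return lam, project_C(x - lam * F(x))      # fallback after max_backtracks
```

No Lipschitz constant enters this rule; the price is that \(P_{C}\) is evaluated once per trial value of λ.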
In [32], Alber introduced the functional \(V: X^{*} \times X \longrightarrow R\) defined by
$$ V\bigl(x^{*},y\bigr)= \bigl\Vert x^{*} \bigr\Vert ^{2}_{*}-2 \bigl\langle x^{*},y \bigr\rangle + \Vert y \Vert ^{2} . $$
(10)
Clearly,

$$ V\bigl(x^{*},y\bigr)\geq \bigl( \bigl\Vert x^{*} \bigr\Vert _{*}- \Vert y \Vert \bigr)^{2} , $$

which follows from (10) and the estimate \(2\langle x^{*},y \rangle \leq 2 \Vert x^{*} \Vert _{*} \Vert y \Vert \).
The operator \(P_{C}: X^{*}\longrightarrow C\subseteq X\) is called the generalized projection operator if it associates to an arbitrary fixed point \(x^{*}\in X^{*}\) the solution of the minimization problem
$$ V\bigl(x^{*}, \tilde{x^{*}}\bigr)= \inf_{y\in C} V\bigl(x^{*}, y\bigr), $$
where \(\tilde{x^{*}}=P_{C}x^{*}\in C \subset X\) is called the generalized projection of the point \(x^{*}\). For more results about \(P_{C}\), see [32]. The next lemma describes the properties of \(P_{C}\).
In [32], Alber also introduced the Lyapunov functional \(\varphi : X \times X\longrightarrow R\) defined by
$$ \varphi (x,y)= \Vert x \Vert ^{2}-2 \langle Jx,y \rangle + \Vert y \Vert ^{2}, \quad \forall x,y\in X. $$
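As a simple illustration (the Hilbert space case, recorded only for orientation): if \(X=H\) is a real Hilbert space, then J is the identity mapping, so

$$ \varphi (x,y)= \Vert x-y \Vert ^{2}, \qquad V\bigl(x^{*},y\bigr)= \bigl\Vert x^{*}-y \bigr\Vert ^{2}, $$

and the generalized projection \(P_{C}\) reduces to the ordinary metric projection onto C.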
Combining (10) with the definition of \(\varphi \), we obtain \(V(x^{*},y)=\varphi (J^{-1}x^{*},y)\) for all \(x^{*}\in X^{*}\) and \(y\in X\). Moreover, we have the following lemma (see [33]).
The following two lemmas, which will be useful in our subsequent convergence analysis, are stated and proved in [34, 35].
The following result, which is needed in the proof of our main theorem, relies on certain estimates and other classical properties of the iterates given in [23].
3 Main results
In this section, we introduce a new iterative algorithm for solving monotone variational inequality problems in Banach spaces. In order to present the method and establish its convergence, we make the following assumption.
Now, we discuss the strong convergence of the following algorithm for solving the monotone variational inequality. Our algorithm is of the following form.
We prove the strong convergence theorem for Algorithm A. Firstly, we give the following theorem, which plays a crucial role in the proof of the main theorem.
The following lemma plays a crucial role in the proof of Theorem 2.