
Open Access 01.12.2015 | Research

A projection descent method for solving variational inequalities

Authors: Abdellah Bnouhachem, Qamrul Hasan Ansari, Ching-Feng Wen

Published in: Journal of Inequalities and Applications | Issue 1/2015

Abstract

In this paper, we propose a descent direction method for solving variational inequalities. A new iterate is obtained by searching for the optimal step size along a new descent direction, which is obtained as a linear combination of two descent directions. Under suitable conditions, the global convergence of the proposed method is studied. Two numerical experiments are presented to illustrate the efficiency of the proposed method.

1 Introduction

The theory of variational inequalities is a well-established subject in nonlinear analysis and optimization. It originated in the early sixties with the pioneering work of Fichera [1] and Stampacchia [2] (see also [3]). Since then it has been extended and studied in several directions. One of the most important aspects of the theory of variational inequalities is the development of solution methods. Several solution methods have been proposed and analyzed in the literature (see, for example, [4–10] and the references therein). Fixed point theory plays an important role in developing various kinds of algorithms for solving variational inequalities. By using the projection operator technique, one can easily establish the equivalence between variational inequalities and fixed point problems. This alternative equivalent formulation provides an elegant solution method, known as the projection gradient method, for solving variational inequalities. The convergence of this method requires the strong monotonicity and Lipschitz continuity of the underlying operator. Many applications do not satisfy these strong conditions, and as a result the projection gradient method is not applicable to such problems. Korpelevich [11] modified the projection gradient method to overcome these difficulties and introduced the so-called extragradient method. It generates the iterates according to the following recursion:
$$ u^{k+1}=P_{K}\bigl[u^{k}-\rho T\bigl( \bar{u}^{k}\bigr)\bigr], $$
where
$$ \bar{u}^{k}=P_{K}\bigl[u^{k}-\rho T \bigl({u}^{k}\bigr)\bigr], $$
and \(\rho>0\) is a fixed parameter. This method overcomes the difficulties arising in the projection gradient method by performing an additional forward step and projection at each iteration, i.e., a double projection. It was proved in [11] that the extragradient method is globally convergent if T is monotone and Lipschitz continuous on K and \(0<\rho<1/L\), where L is the Lipschitz constant of T. When the operator T is not Lipschitz continuous, or it is Lipschitz continuous but the constant L is not known, the fixed parameter ρ must be replaced by a step size computed through an Armijo-like line search procedure, with a new projection needed for each trial point (see, e.g., [11, 12]), which can be computationally very expensive. To overcome these difficulties, several modified projection and extragradient-type methods [12–22] have been suggested and developed for solving variational inequalities. It was shown in [14, 16] that three-step methods perform better than two-step and one-step methods for solving variational inequalities.
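For orientation, the following minimal Python sketch shows one pass of the extragradient recursion above; the names `T`, `proj_K` and the fixed parameter `rho` are placeholders supplied by the user and are not part of the original text.

```python
def extragradient_step(u, T, proj_K, rho):
    """One extragradient iteration: a predictor projection followed by a
    corrector projection, both with the same fixed step size rho."""
    u_bar = proj_K(u - rho * T(u))      # predictor \bar{u}^k
    return proj_K(u - rho * T(u_bar))   # corrector u^{k+1}
```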
The aim of this paper is to develop an algorithm, inspired by Fu [16], for solving variational inequalities. More precisely, in the first variant a new iterate is obtained by searching for the optimal step size along a combined descent direction formed as a linear combination of two descent directions, while the second variant employs another optimal step length to achieve further progress at each iteration. It is proved theoretically that the lower bound of the progress obtained by the second variant is greater than that obtained by the first one. Under certain conditions, the global convergence of the proposed methods is proved. Our results can be viewed as significant extensions of the previously known results.

2 Preliminaries

Let \(K \subset\mathbb{R}^{n}\) be a nonempty closed convex set and \(T : K \to \mathbb{R}^{n}\) be an operator. A classical variational inequality problem, denoted by \(\operatorname{VI}(T,K)\), is to find a vector \(u^{*}\in K\) such that
$$ \bigl\langle T\bigl(u^{*}\bigr), v-u^{*} \bigr\rangle \geq0, \quad \forall v \in K. $$
(1)
It is worth mentioning that the solution set \(S^{*}\) of \(\operatorname{VI}(T,K)\) is nonempty if T is continuous and K is compact.
It is well known that if K is a closed convex cone, then \(\operatorname {VI}(T,K)\) is equivalent to the nonlinear complementarity problem of finding \(u^{*} \in K\) such that
$$ T\bigl(u^{*}\bigr) \in K^{*}\quad \mbox{and} \quad \bigl\langle T\bigl(u^{*}\bigr), u^{*} \bigr\rangle = 0, $$
(2)
where \(K^{*} := \{ y \in\mathbb{R}^{n}: \langle y,x \rangle\geq0 \mbox{ for all } x \in K \}\). For further details on variational inequalities and complementarity problems, we refer to [48] and the references therein.
In what follows, we always assume that the underlying operator T is continuous and pseudomonotone, that is,
$$ \bigl\langle T(u), v-u \bigr\rangle \ge0 \quad \Rightarrow \quad \bigl\langle T(v), v-u \bigr\rangle \geq0,\quad \forall u, v \in K, $$
and the solution set of problem (1), denoted by \(S^{*}\), is nonempty.
The following results will be used in the sequel.
Lemma 1
[23]
Let \(K \subset\mathbb{R}^{n}\) be a nonempty closed convex set and \(P_{K}(\cdot)\) denote the projection on K under the Euclidean norm, that is, \(P_{K}(z)= \arg\min\{ \|z-x\| : x \in K\}\). Then the following statements hold:
(a)
\(\langle z-P_{K}(z),P_{K}(z)-v\rangle\geq 0\), \(\forall z \in\mathbb{R}^{n}\), \(v \in K\).
 
(b)
\({\|P_{K}(z)-v\|}^{2} \le{\|z-v\|}^{2}-{\| z-P_{K}(z)\|}^{2}\), \(\forall z \in\mathbb{R}^{n}\), \(v \in K\).
 
(c)
\(\|P_{K}(w)-P_{K}(v)\|^{2}\leq\langle w-v,P_{K}(w)-P_{K}(v)\rangle\), \(\forall v,w \in\mathbb{R}^{n}\).
 
Lemma 2
[7]
\(u^{*}\) is a solution of problem (1) if and only if
$$ u^{*}=P_{K}\bigl[u^{*}-\rho T\bigl(u^{*}\bigr)\bigr], \quad \textit{where } \rho> 0. $$
(3)
From Lemma 2, it is clear that u is a solution of (1) if and only if u is a zero point of the function
$$ e(u,\rho):=u-P_{K}\bigl[u-\rho T(u)\bigr]. $$
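Computationally, \(\|e(u,\rho)\|\) is the quantity that the algorithms below monitor as a stopping criterion. A minimal sketch (the names `T` and `proj_K` are assumptions, supplied by the caller) is:

```python
import numpy as np

def residual(u, T, proj_K, rho):
    """e(u, rho) = u - P_K[u - rho*T(u)]; u solves VI(T,K) iff this vanishes."""
    return u - proj_K(u - rho * T(u))

# For K = R^n_+, as in the complementarity problems of Section 4, the
# projection is the componentwise positive part:
proj_nonneg = lambda z: np.maximum(z, 0.0)
```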
The next lemma shows that \(\|e(u,\rho)\|\) is a non-decreasing function, while \(\frac{\|e(u,\rho)\|}{\rho}\) is a non-increasing one with respect to ρ.
Lemma 3
[2426]
For all \(u \in\mathbb{R}^{n}\) and \(\rho' > \rho> 0\),
$$ \bigl\Vert e\bigl(u, \rho'\bigr) \bigr\Vert \ge \bigl\Vert e(u,\rho)\bigr\Vert $$
(4)
and
$$ \frac{\| e(u,\rho') \|}{\rho'} \le\frac{\| e(u,\rho) \|}{\rho}. $$
(5)

3 Algorithm and convergence results

In this section, we suggest and analyze two new methods for solving the variational inequality (1). For given \(u^{k}\in K\) and \(\rho_{k}>0\), each iteration of the first method consists of three steps: the first step produces \(\tilde{u}^{k}\), the second step produces \(\bar{u}^{k}\), and the third step produces the new iterate \(u^{k+1}\).
Algorithm 1
Step 1.
Given \(u^{0} \in K\), \(\epsilon>0\), \(\rho_{0}=1\), \(\nu>1\), \(\mu\in(0, \sqrt{2})\), \(\sigma\in(0,1)\), \(\zeta\in(0,1)\), \(\eta _{1}\in(0,\zeta)\), \(\eta_{2}\in(\zeta, \nu)\) and let \(k=0\).
 
Step 2.
If \(\|e(u^{k},\rho_{k} )\| \leq\epsilon\), then stop. Otherwise, go to Step 3.
 
Step 3.
(1)
For a given \(u^{k}\in K\), calculate the two predictors
$$\begin{aligned}& \tilde{u}^{k} = P_{K} \bigl[u^{k} - \rho_{k} T\bigl(u^{k}\bigr)\bigr], \end{aligned}$$
(6a)
$$\begin{aligned}& \bar{u}^{k} = P_{K}\bigl[ \tilde{u}^{k} - \rho_{k} T\bigl( \tilde{u}^{k} \bigr)\bigr]. \end{aligned}$$
(6b)
 
(2)
If \(\|e(\tilde{u}^{k},\rho_{k} )\|\leq\epsilon\), then stop. Otherwise, continue.
 
(3)
If \(\rho_{k}\) satisfies both
$$ r_{1}:=\frac{|\rho_{k}[\langle \tilde{u}^{k}-\bar{u}^{k},T(u^{k})-T(\tilde{u}^{k})\rangle-\langle u^{k}-\bar{u}^{k},T(\tilde{u}^{k})-T(\bar{u}^{k})\rangle]|}{\|\tilde {u}^{k}-\bar{u}^{k}\|^{2}}\leq \mu^{2} $$
(7)
and
$$ r_{2}:=\frac{\|\rho_{k}(T(\tilde{u}^{k})-T(\bar{u}^{k}))\|}{\|\tilde{u}^{k} - \bar{u}^{k}\|} \leq\nu, $$
(8)
then go to Step 4; otherwise, continue.
 
(4)
Perform an Armijo-like line search by reducing \(\rho_{k}\):
$$ \rho_{k} := \rho_{k} *\frac{\sigma}{\max(r_{1}, 1)} $$
(9)
and go to Step 3.
 
 
Step 4.
Obtain the new iterate \(u^{k+1}\) by setting
$$ u^{k+1}(\alpha_{k})= P_{K} \bigl[u^{k} - \alpha_{k}d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr], $$
(10)
where
$$ d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)=\beta_{1}d_{1} \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)+\beta _{2} \rho_{k}T\bigl(\bar{u}^{k}\bigr) $$
(11)
and
$$ d_{1}\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr) : =\bigl(\tilde{u}^{k} - \bar{u}^{k} \bigr) - \rho_{k}\bigl(T\bigl( \tilde{u}^{k} \bigr)-T\bigl(\bar{u}^{k}\bigr)\bigr). $$
(12)
 
Step 5.
Adaptively choose a suitable \(\rho_{k+1}\) as the starting prediction step size for the next iteration.
(1)
Prepare a proper \(\rho_{k+1}\),
$$ \rho_{k+1} :=\left \{ \begin{array}{l@{\quad}l} \rho_{k}\zeta/r_{2} & \mbox{if } r_{2} \le\eta_{1} \mbox{ or } r_{2} \ge\eta_{2}, \\ \rho_{k} & \mbox{otherwise}. \end{array} \right . $$
 
(2)
Return to Step 2, with k replaced by \(k+1\).
 
 
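To make the steps above concrete, the following Python/NumPy sketch implements Algorithm 1 with the explicit step size \(\alpha^{*}_{k_{1}}\) of (26) below and the relaxation factor γ of Lemma 4. The original experiments were coded in Matlab; the function name, default parameter values (those used in Section 4), and loop organization here are an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def algorithm1(T, proj_K, u0, rho0=1.0, nu=1.95, mu=0.95, sigma=0.8,
               zeta=0.7, eta1=0.3, eta2=0.95, gamma=1.9,
               beta1=0.5, beta2=1.5, eps=1e-7, max_iter=10000):
    """Sketch of Algorithm 1 using the step size alpha*_{k_1} of (26).

    T      : callable, the (pseudomonotone) operator
    proj_K : callable, Euclidean projection onto the closed convex set K
    """
    u, rho = np.asarray(u0, dtype=float), rho0
    for k in range(max_iter):
        # Step 2: stop when the residual e(u^k, rho_k) is small enough.
        if np.linalg.norm(u - proj_K(u - rho * T(u))) <= eps:
            break
        # Step 3: predictors (6a)-(6b); reduce rho via (9) until the
        # acceptance criteria (7) and (8) both hold.
        while True:
            u_t = proj_K(u - rho * T(u))           # \tilde{u}^k
            u_b = proj_K(u_t - rho * T(u_t))       # \bar{u}^k
            diff = u_t - u_b                       # equals e(\tilde{u}^k, rho_k)
            if np.linalg.norm(diff) <= eps:
                return u_t, k
            r1 = abs(rho * (diff @ (T(u) - T(u_t))
                            - (u - u_b) @ (T(u_t) - T(u_b)))) / (diff @ diff)
            r2 = np.linalg.norm(rho * (T(u_t) - T(u_b))) / np.linalg.norm(diff)
            if r1 <= mu ** 2 and r2 <= nu:
                break
            rho *= sigma / max(r1, 1.0)            # reduction rule (9)
        # Step 4: combined direction (11)-(12), step size (26), update (10)
        # with the relaxation factor gamma of Lemma 4.
        d1 = diff - rho * (T(u_t) - T(u_b))
        d = beta1 * d1 + beta2 * rho * T(u_b)
        alpha = ((u - u_b) @ d1) / ((beta1 + beta2) * (d1 @ d1))
        u = proj_K(u - gamma * alpha * d)
        # Step 5: self-adaptive choice of the next prediction step size.
        if r2 > 0 and (r2 <= eta1 or r2 >= eta2):
            rho *= zeta / r2
    return u, k
```

For the complementarity problems (36) of Section 4, `proj_K` is simply `lambda z: np.maximum(z, 0.0)`.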
Remark 1
(a) The proposed method can be viewed as a refinement and improvement of the method of He et al. [27], obtained by performing an additional projection step at each iteration and by employing another optimal step length to achieve further progress at each iteration.
(b) If \(\beta_{1}=0\) and \(\beta_{2}=1\), we obtain the method proposed in [16].
(c) If \(\beta_{1}=1\) and \(\beta_{2}=0\), we obtain a descent method in which the new iterate is obtained along the descent direction \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\).
Remark 2
The factor \(\sigma/\max(r_{1}, 1)\) in (9) guarantees that \(\rho_{k}\) is always reduced: if we divided by \(r_{1}\) alone and \(r_{1}<\sigma\), then \(\rho_{k}\) would be enlarged, it would be too large for the next trial, and the number of Armijo-like line searches would increase. Reducing \(\rho_{k}\) only modestly in this way avoids expensive computations in the next iteration.
We now consider a criterion for \(\alpha_{k}\) which ensures that \(u^{k+1}(\alpha_{k})\) is closer to the solution set than \(u^{k}\). For this purpose, we define
$$ \varTheta (\alpha_{k}) := \bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}-\bigl\Vert u^{k+1}( \alpha_{k})-u^{*}\bigr\Vert ^{2}. $$
(13)
Theorem 1
Let \(u^{*}\in S^{*}\). Then we have
$$ \varTheta (\alpha_{k})\geq \varPhi (\alpha_{k})\geq \varPsi (\alpha_{k}), $$
(14)
where
$$ \varPhi (\alpha_{k})=\bigl\Vert u^{k}- u^{k+1}(\alpha_{k})\bigr\Vert ^{2}+2 \alpha_{k}\bigl\langle u^{k+1}(\alpha_{k})- \bar{u}^{k}, d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr) \bigr\rangle , $$
(15)
and
$$ \varPsi (\alpha_{k})=2\alpha_{k}(\beta_{1}+ \beta_{2})\bigl\langle u^{k}-\bar{u}^{k}, d_{1}\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle -\alpha_{k}^{2}(\beta_{1}+ \beta_{2})^{2}\bigl\Vert d_{1}\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2}. $$
Proof
Since \(u^{*}\in K\), setting \(v=u^{*}\) and \(z=u^{k}-\alpha_{k} d(\tilde {u}^{k},\bar{u}^{k})\) in Lemma 1(b), we have
$$ \bigl\Vert u^{k+1}(\alpha_{k})-u^{*}\bigr\Vert ^{2}\leq\bigl\Vert u^{k}-u^{*}-\alpha_{k} d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2}- \bigl\Vert u^{k}-u^{k+1}(\alpha_{k})- \alpha_{k} d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr) \bigr\Vert ^{2}. $$
Using the definition of \(\varTheta (\alpha_{k})\), we get
$$\begin{aligned} \varTheta (\alpha_{k}) \geq& \bigl\Vert u^{k}-u^{k+1}(\alpha_{k})\bigr\Vert ^{2} +2\alpha_{k}\bigl\langle u^{k}-u^{*}, d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ &{} -2\alpha_{k}\bigl\langle u^{k}-u^{k+1}( \alpha_{k}),d\bigl(\tilde{u}^{k},\bar {u}^{k}\bigr) \bigr\rangle \\ =& \bigl\Vert u^{k}-u^{k+1}(\alpha_{k})\bigr\Vert ^{2}+2\alpha_{k}\bigl\langle u^{k+1}(\alpha _{k})-u^{*}, d\bigl(\tilde{u}^{k},\bar{u}^{k}\bigr) \bigr\rangle . \end{aligned}$$
(16)
For any solution \(u^{*}\in S^{*}\) of problem (1), we have
$$ \bigl\langle \rho_{k} T\bigl(u^{*}\bigr),\bar{u}^{k}-u^{*} \bigr\rangle \geq0. $$
By the pseudomonotonicity of T, we obtain
$$ \bigl\langle \rho_{k} T\bigl(\bar{u}^{k}\bigr), \bar{u}^{k}-u^{*}\bigr\rangle \geq0. $$
(17)
Substituting \(z=\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})\) and \(v=u^{*}\) into Lemma 1(a), we get
$$ \bigl\langle \tilde{u}^{k}-\rho_{k} T\bigl( \tilde{u}^{k}\bigr)-\bar{u}^{k},\bar{u}^{k}-u^{*}\bigr\rangle \geq0. $$
(18)
Adding (17) and (18), and using the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\), we have
$$ \bigl\langle d_{1}\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr),\bar{u}^{k}-u^{*}\bigr\rangle \geq0. $$
(19)
Multiplying (19) by \(2\alpha_{k}\beta_{1}\) and (17) by \(2\alpha_{k}\beta_{2}\), and adding the resulting inequalities to (16), we obtain
$$\begin{aligned} \varTheta (\alpha_{k}) \geq& \bigl\Vert u^{k}-u^{k+1}(\alpha_{k})\bigr\Vert ^{2}+2\alpha_{k}\bigl\langle u^{k+1}(\alpha _{k})-\bar{u}^{k}, d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\rangle \\ =&\varPhi (\alpha_{k}) \\ =& \bigl\Vert u^{k}- u^{k+1}(\alpha_{k})\bigr\Vert ^{2}+2\alpha_{k}\beta_{1}\bigl\langle u^{k+1}(\alpha_{k})-\bar{u}^{k}, d_{1} \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ &{} +2\alpha_{k}\beta_{2}\bigl\langle u^{k+1}( \alpha_{k})-\bar{u}^{k}, \rho_{k} T\bigl( \bar{u}^{k}\bigr)\bigr\rangle . \end{aligned}$$
(20)
Note that \(\bar{u}^{k}= P_{K} [\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})]\). We can apply Lemma 1(a) with \(v=u^{k+1}(\alpha_{k})\) to obtain
$$ \bigl\langle u^{k+1}(\alpha_{k})- \bar{u}^{k},\tilde{u}^{k}-\rho_{k} T\bigl(\tilde {u}^{k}\bigr)-\bar{u}^{k}\bigr\rangle \leq0. $$
(21)
Multiplying (21) by \(2\alpha_{k}\beta_{2}\), adding the resulting inequality to (20), and using the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\), we have
$$\begin{aligned} \varPhi (\alpha_{k}) \geq& \bigl\Vert u^{k}- u^{k+1}(\alpha_{k})\bigr\Vert ^{2} +2 \alpha_{k}\beta_{1}\bigl\langle u^{k+1}( \alpha_{k})-\bar{u}^{k}, d_{1}\bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ &{}+2\alpha_{k}\beta_{2}\bigl\langle u^{k+1}( \alpha_{k})-\bar{u}^{k}, d_{1}\bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ =& \bigl\Vert u^{k}- u^{k+1}(\alpha_{k})\bigr\Vert ^{2}+2\alpha_{k}(\beta_{1}+ \beta_{2}) \bigl\langle u^{k+1}(\alpha_{k})- \bar{u}^{k}, d_{1}\bigl(\tilde{u}^{k},\bar {u}^{k}\bigr)\bigr\rangle \\ =& \bigl\Vert u^{k}- u^{k+1}(\alpha_{k})- \alpha_{k}(\beta_{1}+\beta_{2})d_{1} \bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2} \\ &{}+2\alpha_{k}(\beta_{1}+\beta_{2})\bigl\langle u^{k}-\bar{u}^{k}, d_{1}\bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ &{}-\alpha_{k}^{2}(\beta_{1}+ \beta_{2})^{2}\bigl\Vert d_{1}\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2} \\ \geq&2\alpha_{k}(\beta_{1}+\beta_{2})\bigl\langle u^{k}-\bar{u}^{k}, d_{1}\bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\rangle -\alpha_{k}^{2}( \beta_{1}+\beta_{2})^{2}\bigl\Vert d_{1}\bigl(\tilde {u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2} \\ =&\varPsi (\alpha_{k}), \end{aligned}$$
and the theorem is proved. □
Proposition 1
Assume that T is continuously differentiable. Then the following statements hold:
(i)
\(\varPhi ^{\prime}(\alpha)=2 \langle u^{k+1}(\alpha)- \bar {u}^{k},d(\tilde{u}^{k},\bar{u}^{k})\rangle\);
 
(ii)
\(\varPhi ^{\prime}(\alpha)\) is a non-increasing function with respect to \(\alpha\geq0\), and hence, \(\varPhi (\alpha)\) is concave.
 
Proof
For given \(u^{k}, \tilde{u}^{k}, \bar{u}^{k} \in K\), let
$$ h(\alpha,y)=\bigl\Vert y-\bigl[u^{k}-\alpha d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr]\bigr\Vert ^{2}-\alpha^{2}\bigl\Vert d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\Vert ^{2}-2\alpha\bigl\langle \bar{u}^{k}- {u}^{k},d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\rangle . $$
(22)
It is easy to see that the solution of the following problem
$$ \min_{y}\bigl\{ h(\alpha,y) : y\in K\bigr\} $$
is \(y^{*} = P_{K}[u^{k}-\alpha d(\tilde{u}^{k},\bar{u}^{k})]\). By substituting \(y^{*}\) into (22) and simplifying it, we obtain
$$ \varPhi (\alpha)=h(\alpha,y)|_{y = P_{ K}[u^{k}-\alpha d(\tilde{u}^{k},\bar{u}^{k})]} = \min_{y}\bigl\{ h( \alpha,y) : y \in K \bigr\} . $$
\(\varPhi (\alpha)\) is differentiable and its derivative is given by
$$\begin{aligned} \varPhi ^{\prime}(\alpha) =&{\biggl.\frac{\partial h(\alpha,y)}{\partial \alpha}\biggr|_{ y = P_{ K}[u^{k}-\alpha d(\tilde{u}^{k},\bar{u}^{k})]}} \\ =&2\bigl\langle u^{k+1}(\alpha)-u^{k} +\alpha d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr),d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\rangle -2\alpha\bigl\Vert d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\Vert ^{2} \\ &{}-2 \bigl\langle \bar{u}^{k}-u^{k},d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ =&2 \bigl\langle {u}^{k+1}(\alpha)- \bar{u}^{k},d\bigl( \tilde{u}^{k},\bar {u}^{k}\bigr)\bigr\rangle , \end{aligned}$$
which proves (i). We now prove the second assertion. Let \(\bar{\alpha} > \alpha\geq0\). We show that
$$ \varPhi ^{\prime}(\bar{\alpha})\leq \varPhi ^{\prime}(\alpha), $$
that is,
$$ \bigl\langle u^{k+1}(\bar{\alpha})-u^{k+1}( \alpha),d\bigl(\tilde{u}^{k},\bar {u}^{k}\bigr)\bigr\rangle \leq0. $$
(23)
By setting \(z := u^{k}-\bar{\alpha} d(\tilde{u}^{k},\bar{u}^{k})\), \(v := u^{k+1}(\alpha)\) and \(z := u^{k}-\alpha d(\tilde{u}^{k},\bar{u}^{k})\), \(v :=u^{k+1}(\bar{\alpha})\) in Lemma 1(a), respectively, we get
$$ \bigl\langle u^{k}-\bar{\alpha}d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)-u^{k+1}(\bar { \alpha}), u^{k+1}(\alpha)-u^{k+1}(\bar{\alpha})\bigr\rangle \leq0 $$
(24)
and
$$ \bigl\langle u^{k}-{\alpha}d\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)-u^{k+1}({\alpha}), u^{k+1}(\bar{ \alpha})-u^{k+1}(\alpha)\bigr\rangle \leq0. $$
(25)
By adding (24) and (25), we obtain
$$ \bigl\langle u^{k+1}(\bar{\alpha})-u^{k+1}(\alpha),u^{k+1}( \bar{\alpha })-u^{k+1}(\alpha) +(\bar{\alpha}-\alpha)d\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \leq0, $$
that is,
$$ \bigl\Vert u^{k+1}(\bar{\alpha})-u^{k+1}(\alpha)\bigr\Vert ^{2}+(\bar{\alpha}-\alpha) \bigl\langle u^{k+1}(\bar{ \alpha})-u^{k+1}(\alpha), d\bigl(\tilde{u}^{k},\bar {u}^{k}\bigr)\bigr\rangle \leq0. $$
It follows that
$$ \bigl\langle u^{k+1}(\bar{\alpha})-u^{k+1}(\alpha), d\bigl( \tilde{u}^{k},\bar {u}^{k}\bigr) \bigr\rangle \leq \frac{-1}{(\bar{\alpha}-\alpha)}\bigl\Vert u^{k+1}(\bar{\alpha}) - u^{k+1}( \alpha)\bigr\Vert ^{2} \leq0. $$
This establishes inequality (23) and completes the proof. □
Now for the same kth approximate solution \(u^{k}\), let
$$ \alpha^{*}_{k_{1}}=\arg \max_{\alpha}\bigl\{ \varPsi (\alpha ) \mid \alpha> 0\bigr\} $$
and
$$ \alpha^{*}_{k_{2}}=\arg \max_{\alpha}\bigl\{ \varPhi (\alpha ) \mid \alpha> 0\bigr\} . $$
\(\varPsi (\alpha)\) is a quadratic function of α, so it attains its maximum at
$$ \alpha^{*}_{k_{1}}=\frac{\langle u^{k}-\bar{u}^{k}, d_{1}(\tilde{u}^{k},\bar {u}^{k})\rangle}{ (\beta_{1}+\beta_{2})\|d_{1}(\tilde{u}^{k},\bar{u}^{k})\|^{2}} $$
(26)
and its maximum value is
$$ \varPsi \bigl(\alpha^{*}_{k_{1}}\bigr)=\alpha^{*}_{k_{1}}( \beta_{1}+\beta_{2})\bigl\langle u^{k}- \bar{u}^{k}, d_{1}\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\rangle . $$
(27)
In order to obtain \(\alpha^{*}_{k_{2}}\) more easily, we compute it approximately by solving the following simple optimization problem:
$$ \alpha^{*}_{k_{2}} = \arg \max_{\alpha} \bigl\{ \varPhi (\alpha )\mid 0< \alpha\leq m_{1}\alpha^{*}_{k_{1}} \bigr\} , $$
(28)
where \(m_{1} \geq1\).
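Since \(\varPhi\) is concave (Proposition 1) and each evaluation of \(\varPhi(\alpha)\) costs one extra projection, problem (28) lends itself to a one-dimensional concave search. The sketch below uses a golden-section search over \((0, m_{1}\alpha^{*}_{k_{1}}]\); the routine name, the default value of \(m_{1}\), and the tolerance are assumptions, while \(\varPhi\) itself is taken from (15).

```python
import numpy as np

def approx_alpha_k2(u, u_bar, d, proj_K, alpha_k1, m1=2.0, tol=1e-8):
    """Approximate alpha*_{k_2} of (28) by golden-section search of the
    concave function Phi of (15) over the interval (0, m1 * alpha*_{k_1}]."""
    def phi(a):
        u_new = proj_K(u - a * d)                 # u^{k+1}(alpha) of (10)
        return (u - u_new) @ (u - u_new) + 2.0 * a * ((u_new - u_bar) @ d)

    lo, hi = 0.0, m1 * alpha_k1
    g = (np.sqrt(5.0) - 1.0) / 2.0                # golden-ratio factor
    a, b = hi - g * (hi - lo), lo + g * (hi - lo)
    while hi - lo > tol:
        if phi(a) < phi(b):      # maximizer lies to the right of a
            lo, a, b = a, b, a + g * (hi - a)
        else:                    # maximizer lies to the left of b
            hi, b, a = b, a, b - g * (b - lo)
    return 0.5 * (lo + hi)
```

Each trial value of α costs one projection, so the search adds a modest number of projections per iteration; this is the price paid for the larger guaranteed decrease \(\varPhi(\alpha^{*}_{k_{2}})\geq\varPsi(\alpha^{*}_{k_{1}})\) established below.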
Based on Theorem 1 and Proposition 1, the following result can be proved easily.
Proposition 2
Let \(\alpha^{*}_{k_{1}}\) and \(\alpha^{*}_{k_{2}}\) be defined by (26) and (28), respectively, and let T be pseudomonotone and continuously differentiable. Then
(i)
\(\|u^{k}-u^{*}\|^{2}-\| u^{k+1}(\alpha^{*}_{k_{2}})-u^{*}\|^{2}\geq \varPhi (\alpha^{*}_{k_{2}})\);
 
(ii)
\(\| u^{k} - u^{*} \|^{2}-\| u^{k+1}(\alpha^{*}_{k_{1}}) - u^{*} \| ^{2}\geq \varPsi (\alpha^{*}_{k_{1}})\);
 
(iii)
\(\varPhi (\alpha^{*}_{k_{2}})\geq \varPsi (\alpha^{*}_{k_{1}})\);
 
(iv)
if \(\varPhi ^{\prime}(\alpha^{*}_{k_{2}})=0\), then \(\|u^{k}-u^{*}\|^{2}-\| u^{k+1}(\alpha^{*}_{k_{2}}) -u^{*}\|^{2}\geq\|u^{k}- u^{k+1}(\alpha^{*}_{k_{2}}) \|^{2}\).
 
Remark 3
Let
$$ u_{\mathrm{I}}^{k+1}\bigl(\alpha^{*}_{k_{1}}\bigr) =P_{K}\bigl[u^{k}-\alpha^{*}_{k_{1}} d \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr] $$
and
$$ u_{\mathrm{II}}^{k+1}\bigl(\alpha^{*}_{k_{2}}\bigr) =P_{K}\bigl[u^{k}-\alpha^{*}_{k_{2}} d \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr] $$
represent the new iterates generated by the proposed method with \(\alpha_{k}=\alpha^{*}_{k_{1}}\) and \(\alpha_{k}=\alpha^{*}_{k_{2}}\), respectively. Let
$$ \varTheta _{\mathrm{I}}\bigl(\alpha^{*}_{k_{1}}\bigr)=\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}-\bigl\Vert u_{\mathrm{I}}^{k+1}\bigl(\alpha^{*}_{k_{1}}\bigr)-u^{*}\bigr\Vert ^{2} $$
and
$$ \varTheta _{\mathrm{II}}\bigl(\alpha^{*}_{k_{2}}\bigr)=\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}-\bigl\Vert u_{\mathrm{II}}^{k+1}\bigl(\alpha^{*}_{k_{2}}\bigr)-u^{*}\bigr\Vert ^{2} $$
measure the progress made by the respective new iterates. By using Proposition 2, it is easy to show that
$$\begin{aligned}& \varTheta _{\mathrm{I}}\bigl(\alpha^{*}_{k_{1}}\bigr)\geq \varPsi \bigl( \alpha^{*}_{k_{1}}\bigr), \\& \varTheta _{\mathrm{II}}\bigl(\alpha^{*}_{k_{2}}\bigr)\geq \varPhi \bigl( \alpha^{*}_{k_{2}}\bigr) \end{aligned}$$
and
$$ \varPhi \bigl(\alpha^{*}_{k_{2}}\bigr)\geq \varPsi \bigl(\alpha^{*}_{k_{1}} \bigr). $$
The above inequalities show that the proposed method with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) is expected to make more progress at each iteration than the proposed method with \(\alpha_{k}=\alpha^{*}_{k_{1}}\), which explains theoretically why the former outperforms the latter.
In the next theorem, we show that \(\alpha^{*}_{k_{1}}\) is bounded away from zero and that \(\varPsi (\alpha^{*}_{k_{1}})\) is bounded below by a positive multiple of \(\|\tilde{u}^{k}-\bar{u}^{k}\|^{2}\); this is one of the keys to proving the global convergence results.
Theorem 2
Let \(u^{*}\) be a solution of problem (1). For given \(u^{k} \in K\), let \(\tilde{u}^{k}\), \(\bar{u}^{k}\) be the predictors produced by (6a) and (6b). Then
$$ \alpha^{*}_{k_{1}}\geq\frac{2-\mu^{2}}{(\beta_{1}+\beta_{2})(1+\nu)^{2}} $$
(29)
and
$$ \varPsi \bigl(\alpha^{*}_{k_{1}}\bigr)\geq\frac{(2-\mu^{2})^{2}}{(1+\nu)^{2}}\bigl\Vert \tilde {u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}. $$
(30)
Proof
Note that \(\tilde{u}^{k}= P_{K} [u^{k}-\rho_{k} T(u^{k})]\) and \(\bar{u}^{k}= P_{K} [\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})]\). We can apply Lemma 1(c) with \(v=u^{k}-\rho_{k} T(u^{k})\), \(w=\tilde{u}^{k}-\rho_{k} T(\tilde{u}^{k})\) to obtain
$$ \bigl\langle u^{k}-\rho_{k} T\bigl(u^{k}\bigr)- \bigl(\tilde{u}^{k}-\rho_{k} T\bigl(\tilde{u}^{k} \bigr)\bigr), \tilde{u}^{k}-\bar{u}^{k}\bigr\rangle \geq\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}. $$
By some manipulations, we obtain
$$ \bigl\langle u^{k}-\tilde{u}^{k},\tilde{u}^{k}- \bar{u}^{k} \bigr\rangle \geq \bigl\Vert \tilde{u}^{k}- \bar{u}^{k}\bigr\Vert ^{2}+\rho_{k} \bigl\langle \tilde{u}^{k}-\bar {u}^{k},T\bigl(u^{k}\bigr)-T \bigl(\tilde{u}^{k}\bigr)\bigr\rangle . $$
Then we have
$$\begin{aligned} \bigl\langle u^{k}-\tilde{u}^{k},d_{1} \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle =& \bigl\langle u^{k}-\tilde{u}^{k}, \tilde{u}^{k}- \bar{u}^{k}\bigr\rangle -\rho_{k}\bigl\langle u^{k}- \tilde {u}^{k},T\bigl(\tilde{u}^{k}\bigr)-T\bigl( \bar{u}^{k}\bigr)\bigr\rangle \\ \geq&\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}+\rho_{k}\bigl\langle \tilde{u}^{k}-\bar {u}^{k},T\bigl(u^{k}\bigr)-T\bigl(\tilde{u}^{k} \bigr)\bigr\rangle \\ &{}-\rho_{k}\bigl\langle u^{k}-\tilde{u}^{k},T \bigl(\tilde{u}^{k}\bigr)-T\bigl(\bar{u}^{k}\bigr)\bigr\rangle . \end{aligned}$$
(31)
By using (31), (7) and the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\), we get
$$\begin{aligned} \bigl\langle u^{k}-\bar{u}^{k},d_{1} \bigl(\tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle =& \bigl\langle u^{k}-\tilde{u}^{k},d_{1}\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle +\bigl\langle \tilde{u}^{k}-\bar{u}^{k},d_{1}\bigl( \tilde{u}^{k},\bar{u}^{k}\bigr)\bigr\rangle \\ \geq&\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}+\rho_{k}\bigl\langle \tilde{u}^{k} -\bar {u}^{k},T\bigl(u^{k}\bigr)-T\bigl(\tilde{u}^{k} \bigr)\bigr\rangle \\ &{}-\rho_{k}\bigl\langle u^{k}-\tilde{u}^{k},T \bigl(\tilde{u}^{k}\bigr)-T\bigl(\bar{u}^{k}\bigr)\bigr\rangle +\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2} \\ &{}-\rho_{k}\bigl\langle \tilde{u}^{k} - \bar{u}^{k},T\bigl(\tilde{u}^{k}\bigr)-T\bigl(\bar {u}^{k}\bigr)\bigr\rangle \\ \geq&\bigl({2-\mu^{2}}\bigr)\bigl\Vert \tilde{u}^{k}- \bar{u}^{k}\bigr\Vert ^{2}. \end{aligned}$$
(32)
Recalling the definition of \(d_{1}(\tilde{u}^{k},\bar{u}^{k})\) (see (12)) and applying criterion (8), it is easy to see that
$$ \bigl\Vert d_{1}\bigl(\tilde{u}^{k}, \bar{u}^{k}\bigr)\bigr\Vert ^{2}\leq\bigl(\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert +\bigl\Vert \rho_{k}\bigl(T\bigl(\tilde{u}^{k}\bigr)-T\bigl( \bar{u}^{k}\bigr)\bigr)\bigr\Vert \bigr)^{2}\leq(1+ \nu)^{2}\bigl\Vert \tilde {u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}. $$
(33)
Moreover, by using (32) together with (33), we get
$$ \alpha^{*}_{k_{1}}=\frac{ \langle u^{k}-\bar{u}^{k},d_{1}(\tilde{u}^{k},\bar {u}^{k})\rangle}{(\beta_{1}+\beta_{2}) \|d_{1}(\tilde{u}^{k},\bar{u}^{k})\|^{2}} \geq \frac{2-\mu^{2}}{(\beta_{1}+\beta_{2})(1+\nu)^{2}}>0,\quad \mu\in (0,\sqrt{2}). $$
(34)
By substituting (34) and (32) into (27), we obtain (30), which completes the proof. □
From a computational point of view, a relaxation factor \(\gamma\in(0,2)\) is preferable in the correction step. We are now in a position to prove the contractive property of the iterative sequence.
Lemma 4
Let \(u^{*}\) be a solution of problem (1) and let \(u^{k+1}(\gamma\alpha_{k})\) be the sequence generated by Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{1}}\) or \(\alpha_{k}=\alpha^{*}_{k_{2}}\). Then the sequence \(\{u^{k}\}\) is bounded and
$$ \bigl\Vert u^{k+1}(\gamma\alpha_{k}) - u^{*} \bigr\Vert ^{2}\leq\bigl\Vert u^{k} -u^{*}\bigr\Vert ^{2}-c\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}, $$
(35)
where
$$ c := \frac{(2-\mu^{2})^{2}}{(1+\nu)^{2}}>0. $$
Proof
If \(\alpha_{k} = \alpha^{*}_{k_{2}}\), then it follows from (14), (29) and (30) that
$$\begin{aligned} \bigl\Vert u^{k+1}\bigl(\gamma\alpha^{*}_{k_{2}} \bigr)-u^{*}\bigr\Vert ^{2} \leq&\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}- \varPhi \bigl(\alpha^{*}_{k_{2}}\bigr) \\ \leq&\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}- \varPsi \bigl( \alpha^{*}_{k_{1}}\bigr) \\ \leq&\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}- \frac{(2-\mu^{2})^{2}}{(1+\nu)^{2}}\bigl\Vert \tilde {u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}. \end{aligned}$$
If \(\alpha_{k}=\alpha^{*}_{k_{1}}\), then we have
$$\begin{aligned} \bigl\Vert u^{k+1}\bigl(\gamma\alpha^{*}_{k_{1}} \bigr)-u^{*}\bigr\Vert ^{2} \leq&\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}- \varPsi \bigl(\alpha^{*}_{k_{1}}\bigr) \\ \leq&\bigl\Vert u^{k}-u^{*}\bigr\Vert ^{2}- \frac{(2-\mu^{2})^{2}}{(1+\nu)^{2}}\bigl\Vert \tilde {u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}. \end{aligned}$$
Since \(\mu\in(0,\sqrt{2})\), we have
$$ \bigl\Vert u^{k+1}(\gamma\alpha_{k})-u^{*}\bigr\Vert \leq \bigl\Vert u^{k}-u^{*}\bigr\Vert \leq\cdots\leq\bigl\Vert u^{0}-u^{*}\bigr\Vert . $$
From the above inequality, it is easy to verify that the sequence \(\{u^{k}\}\) is bounded. □
We now present the convergence result of Algorithm 1.
Theorem 3
If \(\rho:=\inf_{k\geq 0} \rho_{k} > 0\), then any cluster point of the sequence \(\{\tilde{u}^{k}\}\) generated by Algorithm 1 is a solution of problem (1).
Proof
It follows from (35) that
$$ \sum_{k=0}^{\infty}c\bigl\Vert \tilde{u}^{k}-\bar{u}^{k}\bigr\Vert ^{2}< +\infty, $$
which means that
$$ \lim_{k\rightarrow\infty}\bigl\Vert \tilde{u}^{k}- \bar{u}^{k}\bigr\Vert =0. $$
Since the sequence \(\{u^{k}\}\) is bounded, so is \(\{\tilde{u}^{k}\}\), and hence it has at least one cluster point. Let \(u^{\infty}\) be a cluster point of \(\{\tilde{u}^{k}\}\) and let \(\{\tilde{u}^{k_{j}}\}\) be a subsequence converging to \(u^{\infty}\). By the continuity of \(e(u,\rho)\) and inequality (4), it follows that
$$ \bigl\Vert e\bigl(u^{\infty},\rho\bigr)\bigr\Vert =\lim _{k_{j}\rightarrow \infty}\bigl\Vert e\bigl(\tilde{u}^{k_{j}}, \rho\bigr) \bigr\Vert \leq\lim_{k_{j}\rightarrow \infty}\bigl\Vert e\bigl( \tilde{u}^{k_{j}}, \rho_{k_{j}}\bigr)\bigr\Vert =\lim _{k_{j}\rightarrow \infty}\bigl\Vert \tilde{u}^{k_{j}}- \bar{u}^{k_{j}}\bigr\Vert =0. $$
This means that \(u^{\infty}\) is a solution of problem (1).
We now prove that the sequence \(\{u^{k}\}\) has exactly one cluster point. Assume that \(\tilde{u}\) is another cluster point and satisfies
$$ \delta:=\bigl\Vert \tilde{u} - u^{\infty} \bigr\Vert >0. $$
Since \(u^{\infty} \) is a cluster point of the sequence \(\{u^{k}\}\), there is \(k_{0}>0\) such that
$$ \bigl\Vert u^{k_{0}} - u^{\infty} \bigr\Vert \le \frac{\delta}{2}. $$
On the other hand, since \(u^{\infty}\in S^{*}\), it follows from (35) that
$$ \bigl\Vert u^{k} -u^{\infty} \bigr\Vert \le\bigl\Vert u^{k_{0}} - u^{\infty}\bigr\Vert \quad \mbox{for all } k\ge k_{0}, $$
Hence
$$ \bigl\Vert u^{k} - \tilde{u} \bigr\Vert \ge\bigl\Vert \tilde{u} -u^{\infty}\bigr\Vert - \bigl\Vert u^{k} - u^{\infty} \bigr\Vert \ge\frac{\delta}{2} \quad \forall k\ge k_{0}. $$
This contradicts the assumption that \(\tilde{u}\) is a cluster point of \(\{u^{k}\}\). Thus, the sequence \(\{u^{k}\}\) converges to \(u^{\infty} \in S^{*}\). □

4 Numerical experiments

In order to verify the theoretical assertions, we consider the nonlinear complementarity problem
$$ u \ge0,\qquad T(u)\ge0, \qquad u^{\top} T(u) = 0, $$
(36)
where \(T(u)=D(u)+Mu+q\); here \(D(u)\) and \(Mu+q\) are the nonlinear part and the linear part of \(T(u)\), respectively. Problem (36) is equivalent to problem (1) with \(K = \mathbb{R}^{n}_{+}\).
Example 1
We form the linear part in the test problems as
$$ M=\left ( \begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 1 & 2 & 2 & \cdots& 2 \\ 0 & 1 & 2 & \cdots& 2 \\ 0 & 0 & 1 & \cdots& 2 \\ \vdots& \vdots& \vdots& \ddots& \vdots \\ 0 & 0 & 0 & \cdots& 1 \end{array} \right )_{n \times n} \quad \mbox{and}\quad q=\left ( \begin{array}{@{}c@{}} -1 \\ -1 \\ -1 \\ \vdots\\ - 1 \end{array} \right )_{n \times1}. $$
In \(D(u)\), the nonlinear part of \(T(u)\), the components are chosen to be \(D_{j} (u)= d_{j} *\arctan(u_{j})\), where \(d_{j}\) is a random variable in \((0,1)\).
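A possible way to generate this test problem in Python/NumPy is sketched below; the function name and the random seed are illustrative and are not part of the original Matlab experiments.

```python
import numpy as np

def example1(n, seed=0):
    """Example 1: T(u) = D(u) + M u + q with M upper triangular (ones on the
    diagonal, twos above it), q = -ones, and D_j(u) = d_j * arctan(u_j)."""
    rng = np.random.default_rng(seed)
    M = np.eye(n) + np.triu(2.0 * np.ones((n, n)), k=1)
    q = -np.ones(n)
    d = rng.uniform(0.0, 1.0, size=n)       # random coefficients d_j in (0, 1)
    T = lambda u: d * np.arctan(u) + M @ u + q
    proj_K = lambda z: np.maximum(z, 0.0)   # K = R^n_+, i.e., problem (36)
    return T, proj_K
```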
In all tests we take \(\rho_{0}=1\), \(\zeta=0.7\), \(\eta_{1}=0.3\), \(\eta_{2}=0.95\), \(\mu=0.95\), \(\nu=1.95\), \(\gamma=1.9\), \(\beta_{1}=0.5\), and \(\beta_{2}=1.5\). We employ \(\|e(u^{k},\rho_{k})\|\leq10^{-7}\) as the stopping criterion and choose \(u^{0}=0\) as the initial iterate. All codes were written in Matlab. We compare Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) against Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{1}}\) and the method in [14]. The test results for problem (36) with different dimensions are reported in Tables 1 and 2, where k is the number of iterations and l denotes the number of evaluations of the mapping T.
Table 1  Numerical results for problem (36) with \(\sigma=0.8\) (see (9))

n | The method in [14]: k / l / CPU (s) | Algorithm 1 with \(\alpha^{*}_{k_{1}}\): k / l / CPU (s) | Algorithm 1 with \(\alpha^{*}_{k_{2}}\): k / l / CPU (s)
100 | 228 / 465 / 0.024 | 138 / 447 / 0.023 | 73 / 240 / 0.06
300 | 259 / 540 / 0.04 | 185 / 580 / 0.03 | 103 / 332 / 0.09
500 | 531 / 1,109 / 0.15 | 227 / 700 / 0.10 | 128 / 403 / 0.16
600 | 520 / 1,160 / 0.22 | 225 / 701 / 0.13 | 135 / 419 / 0.20
800 | 568 / 1,236 / 0.51 | 244 / 779 / 0.34 | 157 / 520 / 0.40
Table 2  Numerical results for problem (36) with \(\sigma=0.4\) (see (9))

n | The method in [14]: k / l / CPU (s) | Algorithm 1 with \(\alpha^{*}_{k_{1}}\): k / l / CPU (s) | Algorithm 1 with \(\alpha^{*}_{k_{2}}\): k / l / CPU (s)
100 | 228 / 465 / 0.02 | 142 / 439 / 0.01 | 99 / 309 / 0.06
300 | 259 / 540 / 0.04 | 180 / 553 / 0.03 | 97 / 302 / 0.08
500 | 531 / 1,109 / 0.15 | 226 / 699 / 0.09 | 189 / 579 / 0.23
600 | 520 / 1,160 / 0.23 | 251 / 768 / 0.14 | 129 / 400 / 0.18
800 | 568 / 1,236 / 0.48 | 246 / 754 / 0.31 | 197 / 603 / 0.48
Example 2
We form the linear part in the test problems similarly to Harker and Pang [28]. The matrix \(M = A^{\top}A +B\), where A is an \(n\times n\) matrix whose entries are randomly generated in the interval \((-5,+5)\), and B is a skew-symmetric matrix generated in the same way. The vector q is generated from a uniform distribution in the interval \((-500,500)\). In \(D(u)\), the nonlinear part of \(T(u)\), the components are chosen to be \(D_{j} (u)= d_{j} \arctan(u_{j})\), where \(d_{j}\) is a random variable in \((0,1)\). A similar type of problem was tested in [29] and [30]. In all tests we take \(\rho_{0}=1\), \(\zeta=0.7\), \(\eta_{1}=0.3\), \(\eta_{2}=0.95\), \(\mu=0.95\), \(\nu=1.95\), \(\gamma=1.9\), \(\beta_{1}=0.5\), \(\beta_{2}=1.5\). We employ \(\|e(u^{k},\rho_{k})\|\leq10^{-7}\) as the stopping criterion and choose \(u^{0}=0\) as the initial iterate. The test results for problem (36) with different dimensions are reported in Tables 3 and 4.
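A corresponding generator for Example 2 is sketched below; constructing the skew-symmetric matrix B as \(C-C^{\top}\) from a random C is one natural reading of "generated in the same way", and the seed and function name are again illustrative.

```python
import numpy as np

def example2(n, seed=0):
    """Example 2: T(u) = D(u) + M u + q with M = A^T A + B, A random with
    entries in (-5, 5), B skew-symmetric, q uniform in (-500, 500)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-5.0, 5.0, size=(n, n))
    C = rng.uniform(-5.0, 5.0, size=(n, n))
    B = C - C.T                               # skew-symmetric: B^T = -B
    M = A.T @ A + B
    q = rng.uniform(-500.0, 500.0, size=n)
    d = rng.uniform(0.0, 1.0, size=n)         # coefficients d_j in (0, 1)
    T = lambda u: d * np.arctan(u) + M @ u + q
    proj_K = lambda z: np.maximum(z, 0.0)     # K = R^n_+
    return T, proj_K
```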
Table 3  Numerical results for problem (36) with \(\sigma=0.8\) (see (9))

n | The method in [14]: k / l / CPU (s) | Algorithm 1 with \(\alpha^{*}_{k_{1}}\): k / l / CPU (s) | Algorithm 1 with \(\alpha^{*}_{k_{2}}\): k / l / CPU (s)
100 | 318 / 676 / 0.03 | 93 / 312 / 0.01 | 68 / 235 / 0.05
300 | 435 / 936 / 0.07 | 127 / 404 / 0.03 | 111 / 356 / 0.09
500 | 489 / 1,035 / 0.15 | 146 / 491 / 0.07 | 129 / 416 / 0.17
600 | 406 / 877 / 0.18 | 117 / 378 / 0.08 | 92 / 299 / 0.15
800 | 386 / 832 / 0.64 | 110 / 359 / 0.29 | 76 / 249 / 0.28
Table 4  Numerical results for problem (36) with \(\sigma=0.4\) (see (9))

n | The method in [14]: k / l / CPU (s) | Algorithm 1 with \(\alpha^{*}_{k_{1}}\): k / l / CPU (s) | Algorithm 1 with \(\alpha^{*}_{k_{2}}\): k / l / CPU (s)
100 | 318 / 676 / 0.02 | 157 / 793 / 0.02 | 84 / 298 / 0.06
300 | 435 / 995 / 0.06 | 199 / 936 / 0.07 | 164 / 613 / 0.16
500 | 489 / 1,035 / 0.14 | 190 / 769 / 0.12 | 155 / 550 / 0.22
600 | 406 / 877 / 0.20 | 129 / 402 / 0.08 | 89 / 300 / 0.14
800 | 386 / 832 / 0.35 | 169 / 714 / 0.32 | 89 / 309 / 0.26
Tables 1-4 show that Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) is very efficient even for large-scale nonlinear complementarity problems. Moreover, they demonstrate computationally that Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{2}}\) is more effective than the method presented in [14] and than Algorithm 1 with \(\alpha_{k}=\alpha^{*}_{k_{1}}\), in the sense that it needs fewer iterations and fewer evaluations of T, which clearly illustrates its efficiency.

Acknowledgements

Part of the research of the second author was done during his visit to King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia. The third author was partially supported by the grant MOST 103-2115-M-037-001.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally. All authors read and approved the final manuscript.
References
1. Fichera, G: Problemi elettrostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, Mem. Cl. Sci. Fis. Mat. Nat., Sez. I 7, 91-140 (1964)
2. Stampacchia, G: Formes bilineaires coercivitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 258, 4413-4416 (1964)
3. Stampacchia, G: Variational inequalities. In: Ghizzetti, A (ed.) Theory and Applications of Monotone Operators. Proc. NATO Adv. Study Institute, Venice, Oderisi, Gubbio (1968)
4. Ansari, QH, Lalitha, CS, Mehta, M: Generalized Convexity, Nonsmooth Variational Inequalities and Nonsmooth Optimization. CRC Press, Boca Raton (2014)
5. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2003)
6. Glowinski, R, Lions, JL, Trémolieres, R: Numerical Analysis of Variational Inequalities. North-Holland, Amsterdam (1981)
7. Kinderlehrer, D, Stampacchia, G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
8. Konnov, IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
9. Khobotov, EN: Modification of the extragradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 27, 120-127 (1987)
10. Nagurney, A, Zhang, D: Projected Dynamical Systems and Variational Inequalities with Applications. Kluwer Academic, Dordrecht (1996)
11. Korpelevich, GM: The extragradient method for finding saddle points and other problems. Matecon 12, 747-756 (1976)
12. Iusem, AN, Svaiter, BF: A variant of Korpelevich's method for variational inequalities with a new strategy. Optimization 42, 309-321 (1997)
13. Auslender, A, Teboule, M: Interior projection-like methods for monotone variational inequalities. Math. Program., Ser. A 104, 39-68 (2005)
14. Bnouhachem, A, Noor, MA, Rassias, TR: Three-steps iterative algorithms for mixed variational inequalities. Appl. Math. Comput. 183, 436-446 (2006)
15. Bnouhachem, A, Fu, X, Xu, MH, Zhaohan, S: Modified extragradient methods for solving variational inequalities. Comput. Math. Appl. 57, 230-239 (2009)
16. Fu, X: A two-stage prediction-correction method for solving variational inequalities. J. Comput. Appl. Math. 214, 345-355 (2008)
17. He, BS, Liao, LZ: Improvement of some projection methods for monotone variational inequalities. J. Optim. Theory Appl. 112, 111-128 (2002)
18. He, BS, Yang, H, Meng, Q, Han, DR: Modified Goldstein-Levitin-Polyak projection method for asymmetric strongly monotone variational inequalities. J. Optim. Theory Appl. 112, 129-143 (2002)
19. Wang, YJ, Xiu, NH, Wang, CY: A new version of extragradient method for variational inequalities problems. Comput. Math. Appl. 42, 969-979 (2001)
20. Xiu, NH, Zhang, JZ: Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 152, 559-585 (2003)
21. Bnouhachem, A: A self-adaptive method for solving general mixed variational inequalities. J. Math. Anal. Appl. 309, 136-150 (2005)
22. He, BS, Yang, ZH, Yuan, XM: An approximate proximal-extragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 300, 362-374 (2004)
23. Zarantonello, EH: Projections on convex sets in Hilbert space and spectral theory. In: Zarantonello, EH (ed.) Contributions to Nonlinear Functional Analysis. Academic Press, New York (1971)
24. Calamai, PH, Moré, JJ: Projected gradient methods for linearly constrained problems. Math. Program. 39, 93-116 (1987)
25. Gafni, EM, Bertsekas, DP: Two-metric projection methods for constrained optimization. SIAM J. Control Optim. 22, 936-964 (1984)
26. Peng, JM, Fukushima, M: A hybrid Newton method for solving the variational inequality problem via the D-gap function. Math. Program. 86, 367-386 (1999)
27. He, BS, Yuan, XM, Zhang, JZ: Comparison of two kinds of prediction-correction methods for monotone variational inequalities. Comput. Optim. Appl. 27, 247-267 (2004)
28. Harker, PT, Pang, JS: A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 26, 265-284 (1990)
29. Marcotte, P, Dussault, JP: A note on a globally convergent Newton method for solving variational inequalities. Oper. Res. Lett. 6, 35-42 (1987)
30. Taji, K, Fukushima, M, Ibaraki, T: A globally convergent Newton method for solving strongly monotone variational inequalities. Math. Program. 58, 369-383 (1993)
Metadata
Title: A projection descent method for solving variational inequalities
Authors: Abdellah Bnouhachem, Qamrul Hasan Ansari, Ching-Feng Wen
Publication date: 01.12.2015
Publisher: Springer International Publishing
Published in: Journal of Inequalities and Applications, Issue 1/2015
Electronic ISSN: 1029-242X
DOI: https://doi.org/10.1186/s13660-015-0665-9
Weitere Artikel der Ausgabe 1/2015

Journal of Inequalities and Applications 1/2015 Zur Ausgabe