Published in: Journal of Inequalities and Applications 1/2015

Open Access 01.12.2015 | Research

A generalized Newton method of high-order convergence for solving the large-scale linear complementarity problem

Authors: Yajun Xie, Changfeng Ma



Abstract

In this paper, by extending the classical Newton method, we present the generalized Newton method (GNM) with high-order convergence for solving a class of large-scale linear complementarity problems, which is based on an additional parameter and a modulus-based nonlinear function. Theoretically, the performance of high-order convergence is analyzed in detail. Some numerical experiments further demonstrate the efficiency of the proposed new method.
Notes

Competing interests

The authors declare that there is no conflict of interests regarding the publication of this article.

Authors’ contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

1 Introduction

We consider the linear complementarity problem, abbreviated as \(LCP(q,A)\), to find a vector \(u\in\mathbb{R}^{n}\) such that
$$ \left \{ \textstyle\begin{array}{@{}l} u\geq0, \\ w:=Au+q\geq0,\\ w^{T}u=0, \end{array}\displaystyle \right . $$
(1.1)
where \(A\in\mathbb{R}^{n\times n}\) and \(q\in\mathbb{R}^{n}\) are a given real matrix and a real vector, respectively.
The linear complementarity problem frequently arises in various scientific and engineering applications, such as the Nash equilibrium point of a bimatrix game, the contact problem, and the free boundary problem for journal bearings; see [1–3].
In [3], Lemke first proposed a solution method for the linear complementarity problem. Along these lines, Scarf gave an approximation of fixed points of a continuous mapping [4]. The relationship between the linear complementarity problem and the fixed-point problem is well described by Eaves et al. [5, 6].
Many efficient methods have been developed to solve the linear complementarity problem, especially when the system matrix A is large and sparse. For instance, there are the projected successive overrelaxation iteration [7] and the general fixed-point iterations [8]. On matrix splitting iteration approaches, Bai et al. have derived fruitful research results [9–12]; in particular, in [9], Bai proposed the modulus-based matrix splitting iteration scheme, which is a powerful method for solving the linear complementarity problem. Matrix multisplitting iterations were also considered by Bai et al. for solving the linear complementarity problem in [13–17]. A variety of accelerated modulus-based matrix splitting iteration versions have also been established; see [18, 19]. Furthermore, the modulus-based synchronous multisplitting iteration methods for large sparse linear complementarity problems were introduced in [20]. On the basis of [20], Ljiljana et al. removed the parameter constraint assumption and enlarged the convergence region [21]. Recently, in [22], using vector divisions and the secant method, Foutayeni et al. investigated an efficient hybrid method for solving the linear complementarity problem.
As is well known, the semismooth (or smooth) Newton method is very efficient for nonsmooth (or smooth) equations arising from the complementarity problem, the nonlinear programming problem, the variational inequality problem, the discretization of partial differential equations, the maximal monotone operator problem, etc.; see [23–25] for a detailed discussion. These methods are competitive since they converge rapidly from any sufficiently good initial guess. In order to ensure the global convergence of semismooth Newton methods, merit functions, such as squared norm merit functions, are often exploited; see [26] and the references therein. However, as stated in [27], globalizing the semismooth Newton method for nonsmooth equations is very hard because the corresponding merit function is nondifferentiable in most cases.
In view of this, by introducing a smooth equation and some reasonable equivalent reformulations, we investigate a generalized Newton iteration method with a high-order convergence rate for solving a class of large-scale linear complementarity problems, which makes full use of the second-order convergence rate of the classical Newton method. In this article, we suppose that the matrix A of the linear complementarity problem (1.1) is a P-matrix, i.e., the determinants of all its principal submatrices are positive. Under this assumption, as is well known, the linear complementarity problem (1.1) has a unique solution for every q.
For simplicity of the presentation, we use the following notations throughout the paper: Let \(\mathbb{N}_{k}=\{1,2,\ldots,k\}\) denote the set of first k positive integers. For \(x\in\mathbb{R}^{n}\), \(\|x\|\) stands for the Euclidean norm. Given two real \(n\times m\) matrices \(A=(a_{ij})\) and \(B=(b_{ij})\), we write \(A\geq B\) (or \(A>B\)) if \(a_{ij}\geq b_{ij}\) (or \(a_{ij}>b_{ij}\)) hold for all \(i\in\mathbb {N}_{n}\) and \(j\in\mathbb{N}_{m}\). \(|A|\) and \(\rho(A)\) denote the absolute value and spectral radius of the matrix \(A\in \mathbb{R}^{n\times m}\), respectively. For a differential function \(F(x)\), \(F'(x)\) is referred to as the Jacobi matrix of the function \(F(x)\). For an invertible matrix A, \(A^{-1}\) denotes the inverse matrix of A. The matrix \(\operatorname{Diag}\{ a_{1}, a_{2}, \ldots, a_{n}\}\) denotes the diagonal matrix, where \(a_{i}\) (\(i\in \mathbb{N}_{n}\)) are the elements of the principal diagonal.
The paper is organized as follows. In Section 2, we present a generalized Newton method (GNM) with a high-order convergence rate for solving a class of linear complementarity problems (1.1). In Section 3, we analyze the performance and rate of convergence of the method in detail. Some numerical experiments are given in Section 4 to illustrate that the GNM is efficient. Finally, we end the paper with some conclusions in Section 5.

2 The generalized Newton method

We first introduce some useful results which are vital for the equivalent reformulation of the \(LCP(q,A)\). Furthermore, these conclusions contribute significantly to the analysis of the convergence rate of the generalized Newton iteration method presented below.
Lemma 2.1
The \(LCP(q,A)\) (1.1) is equivalent to the system of nonlinear equations
$$\begin{aligned} (I+A)x+(I-A)|x|-q=0, \end{aligned}$$
(2.1)
where A, q are the given matrix and vector in (1.1), respectively, I is the identity matrix of appropriate dimension, and \(x \in\mathbb{R}^{n}\) is the vector to be determined.
Proof
Let x be a solution of equation (2.1). Then it follows that
$$\begin{aligned} |x|+x=A\bigl(|x|-x\bigr)+q. \end{aligned}$$
(2.2)
Set
$$\begin{aligned} w:=|x|+x,\qquad u:=|x|-x. \end{aligned}$$
(2.3)
It is apparent from (2.2) and (2.3) that
$$\begin{aligned} w=Au+q,\quad w\geq0, u\geq0, \quad \mbox{and} \quad w^{T}u=0, \end{aligned}$$
which means that u is the solution of \(LCP(q,A)\) (1.1).
Conversely, assume that u is the solution of \(LCP(q,A)\) (1.1). It is obvious that
$$\begin{aligned} w\geq0, \qquad u\geq0,\qquad w^{T}u=0, \quad \mbox{where } w:=Au+q. \end{aligned}$$
Set
$$\begin{aligned} x:=\frac{1}{2}(w-u). \end{aligned}$$
Since \(w\geq0\), \(u\geq0\), and \(w^{T}u=0\), for every index i at least one of \(w_{i}\), \(u_{i}\) vanishes, so \(|x_{i}|=\frac{1}{2}(w_{i}+u_{i})\). Hence \(|x|+x=w\) and \(|x|-x=u\), and \(w=Au+q\) shows that x satisfies equation (2.1). □
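The equivalence in Lemma 2.1 is easy to check numerically. The sketch below is our own minimal illustration (not the authors' code); it borrows the \(3\times3\) tridiagonal data of Example 4.1 from Section 4, where the LCP solution is interior (\(w^{*}=0\)), recovers \(x=(w-u)/2\), and verifies that x solves equation (2.1).

```python
import numpy as np

# Data from Example 4.1 with n = 3: A = tridiag(-1, 4, -1), q = (-1, -1, -1)^T
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
q = -np.ones(3)

# For this instance the LCP solution is interior (w = 0), so u solves Au + q = 0
u = np.linalg.solve(A, -q)
w = A @ u + q

# Lemma 2.1: x = (w - u)/2 should satisfy (I + A)x + (I - A)|x| - q = 0
x = 0.5 * (w - u)
residual = (np.eye(3) + A) @ x + (np.eye(3) - A) @ np.abs(x) - q

print(np.round(u, 4))            # approximately (0.3571, 0.4286, 0.3571)
print(np.max(np.abs(residual)))  # essentially zero
```

The printed u matches the solution pair reported for \(n=3\) in Table 1.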
Now, we introduce the smooth function \(F: \mathbb{R}^{n+1}\rightarrow \mathbb{R}^{n}\) by
$$\begin{aligned} F(z):=(I-A)\sqrt{x^{2}+\varepsilon^{2} \mathbf{e}}+(I+A)x-q, \end{aligned}$$
(2.4)
where \(z=(\varepsilon, x^{T})^{T} \in\mathbb{R}^{n+1}\), ε is a positive variable, e is the vector with all elements equal to 1, i.e., \(\mathbf{e}=(1,1,\ldots,1)^{T}\in\mathbb{R}^{n}\), and
$$\begin{aligned} \sqrt{x^{2}+\varepsilon^{2} \mathbf{e}}:= \Bigl[\sqrt{x_{1}^{2}+\varepsilon ^{2}}, \sqrt{x_{2}^{2}+\varepsilon^{2}},\ldots, \sqrt{x_{n}^{2}+\varepsilon ^{2}} \Bigr]^{T}\in\mathbb{R}^{n}. \end{aligned}$$
(2.5)
We further define the nonlinear and differentiable function
$$\begin{aligned} \Psi(z):= \begin{pmatrix} \varepsilon\\ F(z) \end{pmatrix} \in\mathbb{R}^{n+1}. \end{aligned}$$
(2.6)
Lemma 2.2
\(LCP(q,A)\) (1.1) is equivalent to the nonlinear system
$$\begin{aligned} \Psi(z)=0, \end{aligned}$$
(2.7)
where \(\Psi(z)\) is defined by (2.6), \(z=(\varepsilon, x^{T})^{T} \in\mathbb{R}^{n+1}\).
Proof
Indeed, \(\Psi(z)=0\) forces \(\varepsilon=0\), and with \(\varepsilon=0\) the function (2.4) reduces to the left-hand side of equation (2.1); the conclusion then follows from Lemma 2.1. □
Lemma 2.3
The Jacobi matrix of equation (2.6) is
$$\begin{aligned} \Psi'(z)= \begin{pmatrix} 1&\mathbf{0}\\ (I-A)g_{\varepsilon}&(I-A)D_{x}+A+I \end{pmatrix} \in \mathbb{R}^{(n+1)\times(n+1)}, \end{aligned}$$
(2.8)
where \(g_{\varepsilon}=(g_{1},g_{2},\ldots,g_{n})^{T}\in\mathbb{R}^{n}\), \(g_{i}=\frac {\varepsilon}{\sqrt{x_{i}^{2}+\varepsilon^{2}}}\), \(D_{x}=\operatorname {Diag}(d_{1},d_{2},\ldots,d_{n})\in\mathbb{R}^{n\times n}\), \(d_{i}=\frac {x_{i}}{\sqrt{x_{i}^{2}+\varepsilon^{2}}}\), \(i\in\mathbb{N}_{n}\), and \(\mathbf{0}\in\mathbb{R}^{1\times n}\) is a zero vector.
Proof
Differentiating both sides of equation (2.6) with respect to z gives
$$\begin{aligned} \Psi'(z)= \begin{pmatrix} 1&\mathbf{0}\\ F'_{\varepsilon}&F'_{x} \end{pmatrix}. \end{aligned}$$
(2.9)
According to equations (2.4) and (2.5), we immediately have
$$\begin{aligned} F'_{\varepsilon}=(I-A) \biggl(\frac{\varepsilon}{\sqrt{x_{1}^{2}+\varepsilon^{2}}}, \frac{\varepsilon}{\sqrt{x_{2}^{2}+\varepsilon^{2}}},\ldots,\frac {\varepsilon}{\sqrt{x_{n}^{2}+\varepsilon^{2}}}\biggr)^{T} \in \mathbb{R}^{n} \end{aligned}$$
(2.10)
and
$$\begin{aligned} F'_{x}=(I-A)\operatorname{Diag}\biggl( \frac{x_{1}}{\sqrt{x_{1}^{2}+\varepsilon ^{2}}},\frac{x_{2}}{\sqrt{x_{2}^{2}+\varepsilon^{2}}},\ldots ,\frac{x_{n}}{\sqrt{x_{n}^{2}+\varepsilon^{2}}}\biggr)+A+I \in \mathbb{R}^{n\times n}. \end{aligned}$$
(2.11)
This completes the proof. □
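The function Ψ in (2.6) and its Jacobian (2.8) translate directly into code. The following sketch is our own illustration (the function names are ours); it builds both quantities and checks the closed-form Jacobian against a forward-difference approximation on random data.

```python
import numpy as np

def Psi(z, A, q):
    """Psi(z) = (eps; F(z)) with F(z) = (I - A) sqrt(x^2 + eps^2 e) + (I + A) x - q."""
    eps, x = z[0], z[1:]
    I = np.eye(len(x))
    s = np.sqrt(x**2 + eps**2)                 # componentwise square root, eq. (2.5)
    return np.concatenate(([eps], (I - A) @ s + (I + A) @ x - q))

def Psi_prime(z, A):
    """Jacobian (2.8): [[1, 0], [(I - A) g_eps, (I - A) D_x + A + I]]."""
    eps, x = z[0], z[1:]
    n = len(x)
    I = np.eye(n)
    s = np.sqrt(x**2 + eps**2)
    J = np.zeros((n + 1, n + 1))
    J[0, 0] = 1.0
    J[1:, 0] = (I - A) @ (eps / s)             # first column: (I - A) g_eps
    J[1:, 1:] = (I - A) @ np.diag(x / s) + A + I
    return J

# Forward-difference check of (2.8) on random data
rng = np.random.default_rng(0)
n = 4
A, q = rng.standard_normal((n, n)), rng.standard_normal(n)
z = np.concatenate(([0.7], rng.standard_normal(n)))
J = Psi_prime(z, A)
h = 1e-7
J_fd = np.column_stack([(Psi(z + h * e, A, q) - Psi(z, A, q)) / h
                        for e in np.eye(n + 1)])
print(np.max(np.abs(J - J_fd)))                # small (finite-difference error, O(h))
```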
Now, we show the generalized Newton iteration method for the nonlinear smooth system (2.7). A detailed description follows.
Algorithm 2.1
(The generalized Newton method)
Step 1 Input the initial guess \(z^{0}=(\varepsilon_{0}, (x^{0})^{T})^{T}\), the matrix A and vector q, and small positive tolerances \(\sigma_{1}, \sigma_{2}\in(0,1)\); preset a positive integer \(m\geq2\). Set \(k:=0\).
Step 2 Compute \(\Psi(z^{k})\), the Jacobi matrix \(\Psi'(z^{k})\), and its inverse matrix \(A_{k}:=(\Psi'(z^{k}))^{-1}\).
Step 3 Set \(z^{k,1}=z^{k}\), \(j:=1\).
Step 4 Evaluate \(\Psi(z^{k,j})\), update the vector sequence
$$\begin{aligned} z^{k,j+1}=z^{k,j}-A_{k} b, \end{aligned}$$
then calculate \(\Psi(z^{k,j+1})\), where \(b=\Psi(z^{k,j})\).
Step 5 Set \(j:=j+1\), \(z^{k,j}=z^{k,j+1}\), \(\Psi(z^{k,j})=\Psi (z^{k,j+1})\), \(y:=A_{k}b\). If \(j=m\), go to Step 6; otherwise go to Step 4.
Step 6 If \(\|y\|<\sigma_{1}\) or \(\|\Psi(z^{k,m})\|<\sigma_{2}\), set \(z^{*}=z^{k,m}\) and stop; otherwise set \(k:=k+1\) and return to Step 3.
The generalized Newton iteration method (GNIM) can also be written as the following iterative scheme:
$$ \left \{ \textstyle\begin{array}{@{}l} z^{k,0}=z^{k},\\ z^{k,j}=z^{k,j-1}-(\Psi'(z^{k}))^{-1}\Psi(z^{k,j-1}), \quad j=1,2,\ldots,m,\\ z^{k+1}=z^{k,m}, \quad k=0,1,2,\ldots. \end{array}\displaystyle \right . $$
(2.12)
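Scheme (2.12) admits a compact NumPy sketch. The implementation below is ours (an illustration, not the authors' MATLAB code): it freezes the Jacobian \(\Psi'(z^{k})\) for m inner Newton corrections and, following one reading of Remark 2.2 below, resets \(\varepsilon_{k}=\varepsilon_{k-1}^{m}\) between outer sweeps; the initial guess \(x^{0}=q\) mirrors \(z_{(1)}^{0}\) from Section 4.

```python
import numpy as np

def Psi(z, A, q):
    eps, x = z[0], z[1:]
    I = np.eye(len(x))
    s = np.sqrt(x**2 + eps**2)
    return np.concatenate(([eps], (I - A) @ s + (I + A) @ x - q))

def Psi_prime(z, A):
    eps, x = z[0], z[1:]
    n = len(x)
    I = np.eye(n)
    s = np.sqrt(x**2 + eps**2)
    J = np.zeros((n + 1, n + 1))
    J[0, 0] = 1.0
    J[1:, 0] = (I - A) @ (eps / s)
    J[1:, 1:] = (I - A) @ np.diag(x / s) + A + I
    return J

def gnim(A, q, x0, eps0=0.5, m=2, tol=1e-12, kmax=100):
    """Generalized Newton iteration (2.12); returns u = |x| - x solving LCP(q, A)."""
    z, eps = np.concatenate(([eps0], x0)), eps0
    for k in range(kmax):
        J = Psi_prime(z, A)                    # Jacobian frozen for the m inner steps
        for _ in range(m):
            z = z - np.linalg.solve(J, Psi(z, A, q))
        if np.linalg.norm(Psi(z, A, q)) < tol:
            break
        eps = eps**m                           # Remark 2.2 update (requires eps0 < 1)
        z[0] = eps
    x = z[1:]
    return np.abs(x) - x                       # Lemma 2.1: u = |x| - x

# Example 4.1 with n = 3, initial guess x0 = q
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
q = -np.ones(3)
u = gnim(A, q, x0=q.copy())
print(np.round(u, 4))                          # approximately (0.3571, 0.4286, 0.3571)
```

The computed u agrees with the \(n=3\) row of Table 1.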
Remark 2.1
From Lemma 2.2, we know that the iterate \(z^{*}=(\varepsilon^{*},(x^{*})^{T})^{T}\) generated by Algorithm 2.1 yields the solution \(u^{*}=|x^{*}|-x^{*}\) of \(LCP(q,A)\) (1.1).
Remark 2.2
The parameter \(\varepsilon_{k}\) can be updated by \(\varepsilon _{k}=\varepsilon_{k-1}^{m}\). Since the positive integer m is at least 2 in Algorithm 2.1, the positive sequence \(\{\varepsilon_{k}\}_{0}^{\infty}\) decreases monotonically and tends to zero, provided \(\varepsilon_{0}\in(0,1)\).
Remark 2.3
If we set \(m=1\), the GNIM reduces to the classical Newton iteration method.

3 The analysis of convergence

Definition 3.1
([28])
Let \(F: D\subset\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), and let \(x^{*}\in D\) be a solution of the system \(F(x)=0\). If there is a region \(S\subset D\) containing \(x^{*}\) such that, for any initial approximation \(x^{0}\in S\), the iteration sequence \(\{x^{k}, k=0,1,\ldots\}\) is well defined and converges to \(x^{*}\), then \(x^{*}\) is called the attractive point of the iteration sequence.
The classical Newton iteration features a convergence rate of at least order two. We have the following results; for more details see [28] and the references therein.
Lemma 3.1
([28])
Let \(F: D\subset\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) be Fréchet differentiable on an open set \(S_{0}\subset D\), and let \(F'(x^{*})\) be nonsingular, where \(x^{*}\) is the solution of the system \(F(x)=0\). Then the mapping \(G(x)=x-F'(x)^{-1}F(x)\) is well defined on the closed ball \(S=\bar{S}(x^{*},\delta)\subset S_{0}\). Moreover, if the inequality
$$\begin{aligned} \bigl\| F'(x)-F'\bigl(x^{*}\bigr)\bigr\| \leq\beta\bigl\| x-x^{*}\bigr\| \end{aligned}$$
(3.1)
holds, where β is a constant, \(x\in S\), then the classical Newton iteration method has at least a convergence rate of order two.
Lemma 3.2
([28])
Let \(F: D\subset\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) be Fréchet differentiable at the fixed point \(x^{*}\in \operatorname{int}(D)\), and let the spectral radius of \(F'(x^{*})\) satisfy
$$\begin{aligned} \rho\bigl(F'\bigl(x^{*}\bigr)\bigr)=\sigma< 1. \end{aligned}$$
(3.2)
Then \(x^{*}\) is the attractive point of the iterative sequence \(x^{k+1}=F(x^{k}) \) (\(k=0,1,\ldots\)) on some open ball \(S=S(x^{*},\delta)\subset D\), for any initial guess \(x^{0}\in S\).
Lemma 3.3
([28])
Let \(F: D\subset\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) be continuous and differentiable on the convex set \(D_{0}\subset D\), and satisfy
$$\begin{aligned} \bigl\| F'(u)-F'(v)\bigr\| \leq\beta\|u-v\|^{p},\quad u, v \in D_{0}. \end{aligned}$$
(3.3)
Then we have
$$\begin{aligned} \bigl\| F(y)-F(x)-F'(x) (y-x)\bigr\| \leq\frac{\beta}{1+p}\|y-x \|^{1+p},\quad x, y\in D_{0}, \end{aligned}$$
(3.4)
where the constants \(\beta\geq0\), \(p\geq0\).
We are now in a position to derive the main convergence result of the generalized Newton iteration method.
Theorem 3.1
Let \(\Psi: D\subset\mathbb{R}^{n+1}\rightarrow\mathbb{R}^{n+1}\) be Fréchet differentiable on a ball \(S=S(z^{*},\delta)\subset D\), and let \(\Psi'(z^{*})\) be nonsingular, where \(z^{*}\) is the solution of \(\Psi(z)=0\). Assume that for arbitrary \(z\in S\) there is a constant \(\beta>0\) such that
$$\begin{aligned} \bigl\| \Psi'(z)-\Psi'\bigl(z^{*}\bigr)\bigr\| \leq\beta\bigl\| z-z^{*}\bigr\| \end{aligned}$$
(3.5)
holds. Then \(z^{*}\) is the attractive point of the iterative sequence \(\{z^{k}\} _{0}^{\infty}\) generated by Algorithm  2.1, and
$$\begin{aligned} \bigl\| z^{k+1}-z^{*}\bigr\| \leq L_{2}\bigl\| z^{k}-z^{*} \bigr\| ^{m+1}, \end{aligned}$$
(3.6)
where \(L_{2}\) is a constant independent of iteration number k.
Proof
To begin with, we consider the iterative equation (2.12) with the case \(m=2\), i.e.,
$$\begin{aligned} z^{k+1}=z^{k}-\bigl(\Psi' \bigl(z^{k}\bigr)\bigr)^{-1} \bigl[\Psi\bigl(z^{k} \bigr)-\Psi \bigl(z^{k}-\bigl(\Psi '\bigl(z^{k} \bigr)\bigr)^{-1}\Psi\bigl(z^{k}\bigr) \bigr) \bigr], \quad k=0,1, \ldots. \end{aligned}$$
(3.7)
By applying Lemma 3.1, \(P(z):=z-(\Psi'(z))^{-1}\Psi(z)\) is well defined on \(S_{1}:=\bar{S}(z^{*}, \delta_{1})\subset S\) and
$$\begin{aligned} \bigl\| P(z)-z^{*}\bigr\| \leq\xi\bigl\| z-z^{*}\bigr\| ^{2},\quad z\in S_{1}, \end{aligned}$$
(3.8)
where ξ is a positive constant. Hence, the mapping \(M(z)=P(z)-(\Psi'(z))^{-1}\Psi(P(z))\) is well defined on the closed sphere \(S_{2}=\bar{S}(z^{*},\delta_{2})\subset S_{1}\), where \(\delta_{2}\leq\frac{\delta_{1}}{\xi}\) and
$$\begin{aligned} M'\bigl(z^{*}\bigr)=P'\bigl(z^{*}\bigr)-\bigl(\Psi'\bigl(z^{*}\bigr)\bigr)^{-1}\Psi'\bigl(P\bigl(z^{*}\bigr)\bigr)P'\bigl(z^{*}\bigr)=0, \end{aligned}$$
since \(\Psi(P(z^{*}))=\Psi(z^{*})=0\) and \(P'(z^{*})=0\).
So, \(\rho(M'(z^{*}))=0<1\). It follows from Lemma 3.2 that \(z^{*}\) is the attractive point of equation (3.7).
On the other hand, by the nonsingularity of \(\Psi'(z^{*})\) and continuity, there is a constant \(\zeta>0\) such that \(\|(\Psi'(z))^{-1}\|\leq \zeta\) for \(z\in S_{2}\). Then it follows from Lemma 3.3 and the assumptions that
$$\begin{aligned} \bigl\| M(z)-z^{*}\bigr\| \leq& \bigl\| \bigl(\Psi'(z) \bigr)^{-1} \bigr\| \bigl\| \Psi '(z)\bigl[P(z)-z^{*}\bigr]-\Psi \bigl(P(z) \bigr) \bigr\| \\ \leq&\zeta \bigl[ \bigl\| \Psi \bigl(P(z) \bigr)-\Psi\bigl(z^{*}\bigr)-\Psi '\bigl(z^{*}\bigr) \bigl(P(z)-z^{*}\bigr) \bigr\| \\ &{}+ \bigl\| \bigl(\Psi'\bigl(z^{*}\bigr)-\Psi'(z) \bigr) \bigl(P(z)-z^{*}\bigr) \bigr\| \bigr] \\ \leq&\zeta \biggl[\frac{1}{2}\beta \bigl\| P(z)-z^{*} \bigr\| ^{2}+ \beta\bigl\| z-z^{*}\bigr\| \bigl\| P(z)-z^{*}\bigr\| \biggr] \\ \leq&\beta\zeta \biggl[\frac{1}{2}\xi^{2}\bigl\| z-z^{*}\bigr\| +\xi \biggr]\bigl\| z-z^{*}\bigr\| ^{3} \\ \leq&\beta\zeta\xi \biggl[\frac{1}{2}\xi\delta_{2}+1 \biggr]\bigl\| z-z^{*}\bigr\| ^{3}. \end{aligned}$$
(3.9)
Let \(L_{0}:=\beta\zeta\xi [\frac{1}{2}\xi\delta_{2}+1 ]\), evidently, it is a constant independent of the iteration number k.
Taking \(z=z^{k}\) and \(z^{k+1}=M(z^{k})\) on the left of inequality (3.9) yields
$$\begin{aligned} \bigl\| z^{k+1}-z^{*}\bigr\| \leq L_{0}\bigl\| z^{k}-z^{*} \bigr\| ^{3}, \end{aligned}$$
(3.10)
which implies (3.6) holds for \(m=2\).
Now we show that the iterative scheme (2.12) has at least a convergence rate of order \(m+1\); the result follows by mathematical induction.
By (3.9), one has
$$\begin{aligned} \bigl\| z^{k,2}-z^{*}\bigr\| \leq L_{0}\bigl\| z^{k}-z^{*} \bigr\| ^{3}=O\bigl(\bigl\| z^{k}-z^{*}\bigr\| ^{3}\bigr). \end{aligned}$$
(3.11)
Next, we assume that
$$\begin{aligned} \bigl\| z^{k,m-1}-z^{*}\bigr\| \leq L_{1}\bigl\| z^{k}-z^{*} \bigr\| ^{m}=O\bigl(\bigl\| z^{k}-z^{*}\bigr\| ^{m}\bigr) \end{aligned}$$
(3.12)
holds. Now, we will verify the statement
$$\begin{aligned} \bigl\| z^{k,m}-z^{*}\bigr\| \leq L_{2}\bigl\| z^{k}-z^{*} \bigr\| ^{m+1}=O\bigl(\bigl\| z^{k}-z^{*}\bigr\| ^{m+1}\bigr), \end{aligned}$$
(3.13)
where \(L_{1}\), \(L_{2}\) are constants independent of the iteration number k.
As a matter of fact, observing (2.12), also by Lemma 3.3 and (3.12), one obtains
$$\begin{aligned} \bigl\| z^{k,m}-z^{*}\bigr\| \leq& \bigl\| \bigl(\Psi' \bigl(z^{k}\bigr)\bigr)^{-1} \bigr\| \bigl[ \bigl\| \Psi \bigl(z^{k,m-1}\bigr)-\Psi\bigl(z^{*}\bigr)-\Psi'\bigl(z^{*} \bigr) \bigl(z^{k,m-1}-z^{*}\bigr) \bigr\| \\ &{}+ \bigl\| \bigl(\Psi'\bigl(z^{*}\bigr)-\Psi' \bigl(z^{k}\bigr) \bigr) \bigl(z^{k,m-1}-z^{*}\bigr) \bigr\| \bigr] \\ \leq&\zeta \biggl[\frac{1}{2}\beta \bigl\| z^{k,m-1}-z^{*} \bigr\| ^{2}+\beta\bigl\| z^{k}-z^{*}\bigr\| \bigl\| z^{k,m-1}-z^{*}\bigr\| \biggr] \\ \leq&\zeta\beta \biggl[\frac{1}{2}L_{1}^{2} \bigl\| z^{k}-z^{*} \bigr\| ^{m-1}+L_{1} \biggr] \bigl\| z^{k}-z^{*}\bigr\| ^{m+1} \\ \leq&\zeta\beta \biggl[\frac{1}{2}L_{1}^{2} \delta_{2}^{m-1}+L_{1} \biggr]\bigl\| z^{k}-z^{*}\bigr\| ^{m+1} \\ =&L_{2}\bigl\| z^{k}-z^{*}\bigr\| ^{m+1}=O\bigl( \bigl\| z^{k}-z^{*}\bigr\| ^{m+1}\bigr), \end{aligned}$$
(3.14)
where \(L_{2}:=\zeta\beta [\frac{1}{2}L_{1}^{2}\delta_{2}^{m-1}+L_{1} ]\), which completes the proof. □
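The order \(m+1\) established in Theorem 3.1 can also be observed numerically. The scalar sketch below is our own illustration: it applies one outer sweep of the frozen-Jacobian scheme (2.12) with \(m=2\) to \(f(x)=x^{2}-2\) from two starting points and estimates the convergence order from the error ratios, which should come out close to \(m+1=3\).

```python
import math

def inner_sweep(f, df, x0, m=2):
    """One outer step of (2.12): m Newton corrections with the derivative frozen at x0."""
    fp = df(x0)
    x = x0
    for _ in range(m):
        x = x - f(x) / fp
    return x

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
root = math.sqrt(2.0)

# Two starting errors e0 and the resulting errors e1 after one outer sweep
e0a, e0b = 1.5 - root, 1.45 - root
e1a = abs(inner_sweep(f, df, 1.5) - root)
e1b = abs(inner_sweep(f, df, 1.45) - root)

# Empirical order p from e1 ~ C * e0^p at the two starting errors
p = math.log(e1a / e1b) / math.log(e0a / e0b)
print(round(p, 2))   # close to 3
```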

4 Numerical experiments

In this section, we report some numerical results to illustrate the effectiveness of the GNIM for solving the linear complementarity problem. We compare the GNIM with the Fischer-based semismooth Newton method (FBSN; see [29, 30]) and the cosh-based smoothing Newton method (CBSN; see [31]) in terms of the number of iteration steps (denoted 'IT'), the elapsed CPU time in seconds (denoted 'CPU'), and the residual error (denoted 'RES'). In the actual computations, all runs are terminated once the current iterates satisfy
$$\begin{aligned} \operatorname{RES}:=\bigl\| \Psi\bigl(z^{k}\bigr)\bigr\| < 10^{-14}, \end{aligned}$$
or once the number of iterations exceeds the prescribed maximum \(k_{\max}\). The numerical experiments were carried out in MATLAB R2011b (7.13) on a PC with an Intel(R) Core(TM) i7-2670QM CPU at 2.20 GHz and 8 GB RAM, running Windows 7.
In fact, our approach can be considered an accelerated version of the smoothing Newton method, so high accuracy is an advantage of the GNIM. If we set \(m=1\), the GNIM reduces to the classical Newton method, whereas a larger m may consume more CPU time because of the increased number of inner iterations. We usually choose \(m=2\) in the concrete tests, which already ensures a rapid convergence rate. The following examples confirm this judgment.
Example 4.1
([27])
We consider the linear complementarity problem (1.1) with
$$\begin{aligned} A=\operatorname{tridiag}(-1, 4,-1),\qquad q=(-1,\ldots,-1)^{T}. \end{aligned}$$
Example 4.2
We consider the linear complementarity problem (1.1) with \(A=(a_{ij})\), \(i, j\in\mathbb{N}_{n}\), where \(a_{ij}=\frac{i\delta _{ij}}{n}\), δ is the Kronecker delta (\(\delta_{ii}=1\), \(\delta_{ij}=0\), when \(i\neq j\)), and \(q=(q_{i})\), \(i\in \mathbb{N}_{n}\) such that \(q_{i}=-1\).
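Since A is diagonal in Example 4.2, the problem has the closed-form solution \(u_{i}=n/i\) with \(w=0\), because \(w_{i}=(i/n)(n/i)-1=0\); this matches the entries reported in Table 5 below. A quick check (our own sketch):

```python
import numpy as np

n = 5
idx = np.arange(1, n + 1)
A = np.diag(idx / n)        # a_ij = i * delta_ij / n
q = -np.ones(n)

u = n / idx                 # closed form: w_i = (i/n)(n/i) - 1 = 0
w = A @ u + q
print(np.round(u, 4))       # (5, 2.5, 1.6667, 1.25, 1), as in Table 5
```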
In these examples, we choose \(m=2\) for the GNIM, and \(\rho=0.15\), \(\beta =0.25\), \(p=3.0\) for the FBSN (for more details, see [29]). In particular, the RES is reported as 0 if \(\operatorname{RES}<10^{-17}\) in the numerical results. The initial guess \(z_{(1)}^{0}\) or \(z_{(2)}^{0}\) is chosen as \([\varepsilon_{0},q^{T}]^{T}\) or the zero vector, respectively. First, we compare the numerical performance of the GNIM with that of the CBSN for different n. From Tables 1 and 2, we find that the GNIM and the CBSN are both effective methods, whose solution pairs \((u^{*}, w^{*})\) are displayed in the last column of the tables. However, in terms of the iteration number and CPU time, the CBSN does not stand comparison with our method. In Table 3, we provide the numerical results for the GNIM and the CBSN with larger dimensions, where the merit of the GNIM is reflected clearly and incisively.
Table 1 The GNIM numerical results for Example 4.1 with initial \(z_{(1)}^{0}\)

| n | IT | CPU | RES | Solution pair \((u^{*},w^{*})\) |
|---|----|--------|-----|---------------------------------|
| 3 | 3 | 0.0068 | 0 | \(u^{*}=(0.3571,0.4286,0.3571)\), \(w^{*}=(0,0,0)\) |
| 5 | 3 | 0.0073 | 0 | \(u^{*}=(0.3654,0.4615,0.4808,0.4615,0.3654)\), \(w^{*}=1.0e{-}015*(0.4441,0,0,0,0)\) |
| 8 | 3 | 0.0074 | 0 | \(u^{*}=(0.3660,0.4641,0.4902,0.4967,0.4967,0.4902,0.4641,0.3660)\), \(w^{*}=(0,0,0,0,0,0,0,0)\) |
| 10 | 3 | 0.0075 | 0 | \(u^{*}=(0.3660,0.4641,0.4904,0.4974,0.4991,0.4991,0.4974,0.4904,0.4641,0.3660)\), \(w^{*}=1.0e{-}015*(-0.1110,0.4441,-0.4441,0,0,0,0.2220,0,0,0)\) |
Table 2 The CBSN numerical results for Example 4.1 with initial \(z_{(1)}^{0}\)

| n | IT | CPU | RES | Solution pair \((u^{*},w^{*})\) |
|---|----|--------|-----|---------------------------------|
| 3 | 4 | 0.0104 | 0 | \(u^{*}=(0.3571,0.4286,0.3571)\), \(w^{*}=1.0e{-}011*(0.8880,-0.5869,0.8880)\) |
| 5 | 4 | 0.0106 | 6.1063e-016 | \(u^{*}=(0.3654,0.4615,0.4808,0.4615,0.3654)\), \(w^{*}=1.0e{-}015*(0,0,0,0,-0.1110)\) |
| 8 | 4 | 0.0113 | 6.2042e-016 | \(u^{*}=(0.3660,0.4641,0.4902,0.4967,0.4967,0.4902,0.4641,0.3660)\), \(w^{*}=1.0e{-}011*(0.4839,-0.1609,-0.0001,0,0,-0.0001,-0.1608,0.4839)\) |
| 10 | 3 | 0.0115 | 6.2232e-016 | \(u^{*}=(0.3660,0.4641,0.4904,0.4974,0.4991,0.4991,0.4974,0.4904,0.4641,0.3660)\), \(w^{*}=1.0e{-}011*(0.4835,-0.1607,-0.0001,0,0,0,0,-0.0001,-0.1607,0.4835)\) |
Table 3 Numerical results for Example 4.1

| n | | GNIM, \(z^{0}_{(1)}\) | CBSN, \(z^{0}_{(1)}\) | GNIM, \(z^{0}_{(2)}\) | CBSN, \(z^{0}_{(2)}\) |
|---|-----|---|---|---|---|
| 50 | IT | 2 | 3 | 3 | 4 |
| | CPU | 0.0084 | 0.0250 | 0.0119 | 0.0447 |
| | RES | 5.7787e-016 | 1.7554e-016 | 5.3787e-016 | 1.8619e-016 |
| 100 | IT | 2 | 3 | 3 | 4 |
| | CPU | 0.0135 | 0.0230 | 0.0142 | 0.0364 |
| | RES | 6.1815e-016 | 2.1678e-016 | 5.6610e-016 | 2.7616e-016 |
| 300 | IT | 2 | 3 | 3 | 4 |
| | CPU | 0.0376 | 0.0547 | 0.0562 | 0.0905 |
| | RES | 6.2315e-016 | 2.2648e-016 | 5.7610e-016 | 2.9616e-016 |
It is observed from Table 4 that the GNIM is much more competitive than the FBSN for the large-scale linear complementarity problem. The numerical results for Example 4.2 are shown in Tables 5-7. In particular, from Table 7, we find that the FBSN is unable to obtain a convergent result within a small number of iteration steps (e.g., within 11) for Example 4.2.
Table 4 Numerical results for Example 4.1

| n | | GNIM, \(z^{0}_{(1)}\) | FBSN, \(z^{0}_{(1)}\) | GNIM, \(z^{0}_{(2)}\) | FBSN, \(z^{0}_{(2)}\) |
|---|-----|---|---|---|---|
| 500 | IT | 2 | 11 | 3 | 11 |
| | CPU | 0.0886 | 0.3229 | 0.1366 | 0.3831 |
| | RES | 6.1815e-016 | 6.6613e-016 | 5.6610e-016 | 1.1047e-015 |
| 1000 | IT | 2 | 11 | 3 | 11 |
| | CPU | 0.5114 | 1.0758 | 0.7728 | 1.3069 |
| | RES | 6.2113e-016 | 6.6723e-016 | 5.6720e-016 | 1.3027e-015 |
| 3000 | IT | 2 | 11 | 3 | 11 |
| | CPU | 9.1418 | 26.6155 | 11.1484 | 11.5429 |
| | RES | 6.2812e-016 | 6.6921e-016 | 5.5610e-016 | 1.2123e-015 |
| 5000 | IT | 2 | 11 | 3 | 11 |
| | CPU | 42.9507 | 71.3675 | 66.0831 | 79.3341 |
| | RES | 6.4615e-016 | 6.6990e-016 | 5.7810e-016 | 1.3056e-015 |
Table 5 The GNIM numerical results for Example 4.2 with initial \(z_{(1)}^{0}\)

| n | IT | CPU | RES | Solution pair \((u^{*},w^{*})\) |
|---|----|--------|-----|---------------------------------|
| 3 | 2 | 0.0067 | 0 | \(u^{*}=(3,1.5,1)\), \(w^{*}=(0,0,0)\) |
| 5 | 2 | 0.0070 | 0 | \(u^{*}=(5,2.5,1.6667,1.25,1)\), \(w^{*}=1.0e{-}015*(0.2220,0,0,0,0)\) |
| 8 | 2 | 0.0072 | 0 | \(u^{*}=(8,4,2.6667,2,1.6,1.3333,1.1429,1)\), \(w^{*}=(0,0,0,0,0,0,0,0)\) |
| 10 | 2 | 0.0075 | 0 | \(u^{*}=(10,5,3.3333,2.5,2,1.6667,1.4286,1.25,1.1111,1)\), \(w^{*}=1.0e{-}015*(0,0.2220,-0.1110,0,0,0,0.2220,0,0,0)\) |
Table 6 The CBSN numerical results for Example 4.2 with initial \(z_{(1)}^{0}\)

| n | IT | CPU | RES | Solution pair \((u^{*},w^{*})\) |
|---|----|--------|-----|---------------------------------|
| 3 | 4 | 0.0155 | 0 | \(u^{*}=(3,1.5,1)\), \(w^{*}=(0,0,0)\) |
| 5 | 4 | 0.0161 | 0 | \(u^{*}=(5,2.5,1.6667,1.25,1)\), \(w^{*}=1.0e{-}015*(0.4441,0,0,0,0)\) |
| 8 | 4 | 0.0163 | 2.2204e-016 | \(u^{*}=(8,4,2.6667,2,1.6,1.3333,1.1429,1)\), \(w^{*}=1.0e{-}011*(-0.1110,0.2220,0,0,0.2220,0,0,0)\) |
| 10 | 4 | 0.0147 | 8.8818e-016 | \(u^{*}=(10,5,3.3333,2.5,2,1.6667,1.4286,1.25,1.1111,1)\), \(w^{*}=1.0e{-}014*(-0.1221,0.0444,-0.0111,0.0222,0,-0.0111,0.0222,0,0,0)\) |
Table 7 Numerical results for Example 4.2

| n | | GNIM, \(z^{0}_{(1)}\) | FBSN, \(z^{0}_{(1)}\) | GNIM, \(z^{0}_{(2)}\) | FBSN, \(z^{0}_{(2)}\) |
|---|-----|---|---|---|---|
| 500 | IT | 2 | 11 | 3 | 11 |
| | CPU | 0.0957 | 0.3025 | 0.1421 | 0.3308 |
| | RES | 2.3182e-015 | 1.7073e+001 | 2.1006e-015 | 1.8073e+001 |
| 1000 | IT | 2 | 11 | 3 | 11 |
| | CPU | 0.5016 | 0.9461 | 0.6327 | 1.1754 |
| | RES | 6.6438e-014 | 2.4129e+001 | 6.6351e-014 | 2.6249e+001 |
To sum up, all the numerical results illustrate the effectiveness of the GNIM.

5 Conclusion

In this paper, we establish the generalized Newton method (GNM) for solving the large-scale linear complementarity problem. The GNM features a high-order convergence rate, of at least order \(m+1\), which has been verified in detail in the theoretical analysis. The new strategy increases efficiency remarkably; in fact, it can be regarded as an accelerated version of the classical Newton approach. Experimental tests compare the numerical performance with that of some existing efficient methods and testify to the viability of the proposed approach.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 11071041, 11201074), the Fujian Natural Science Foundation (Grant Nos. 2015J01578, 2013J01006), and the Outstanding Young Training Plan of Fujian Province Universities (Grant No. 15kxtz13, 2015).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References
1.
Zurück zum Zitat Cottle, RW, Pang, J-S, Stone, RE: The Linear Complementarity Problem. Academic Press, San Diego (1992) MATH Cottle, RW, Pang, J-S, Stone, RE: The Linear Complementarity Problem. Academic Press, San Diego (1992) MATH
2.
Zurück zum Zitat Murty, KG: Linear Complementarity, Linear and Nonlinear Programming. Heldermann, Berlin (1988) MATH Murty, KG: Linear Complementarity, Linear and Nonlinear Programming. Heldermann, Berlin (1988) MATH
7.
Zurück zum Zitat Cryer, C: The solution of a quadratic programming using systematic overrelaxation. SIAM J. Control Optim. 9, 385-392 (1971) MathSciNetCrossRef Cryer, C: The solution of a quadratic programming using systematic overrelaxation. SIAM J. Control Optim. 9, 385-392 (1971) MathSciNetCrossRef
8.
Zurück zum Zitat Tseng, P: On linear convergence of iterative method for the variational inequality problem. J. Comput. Appl. Math. 60, 237-252 (1995) MATHMathSciNetCrossRef Tseng, P: On linear convergence of iterative method for the variational inequality problem. J. Comput. Appl. Math. 60, 237-252 (1995) MATHMathSciNetCrossRef
9.
Zurück zum Zitat Bai, Z-Z: Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 17, 917-933 (2010) MATHMathSciNetCrossRef Bai, Z-Z: Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 17, 917-933 (2010) MATHMathSciNetCrossRef
10.
Zurück zum Zitat Bai, Z-Z, Evans, DJ: Matrix multisplitting methods with applications to linear complementarity problems: parallel synchronous and chaotic methods. In: Calculateurs Parallèles Réseaux et Systèmes Répartis, vol. 13, pp. 125-141 (2001) Bai, Z-Z, Evans, DJ: Matrix multisplitting methods with applications to linear complementarity problems: parallel synchronous and chaotic methods. In: Calculateurs Parallèles Réseaux et Systèmes Répartis, vol. 13, pp. 125-141 (2001)
11.
Zurück zum Zitat Bai, Z-Z, Evans, DJ: Matrix multisplitting methods with applications to linear complementarity problems: parallel asynchronous methods. Int. J. Comput. Math. 79, 205-232 (2002) MATHMathSciNetCrossRef Bai, Z-Z, Evans, DJ: Matrix multisplitting methods with applications to linear complementarity problems: parallel asynchronous methods. Int. J. Comput. Math. 79, 205-232 (2002) MATHMathSciNetCrossRef
12.
Zurück zum Zitat Bai, Z-Z, Zhang, L-L: Modulus-based synchronous multisplitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 20, 425-439 (2013) MATHMathSciNetCrossRef Bai, Z-Z, Zhang, L-L: Modulus-based synchronous multisplitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 20, 425-439 (2013) MATHMathSciNetCrossRef
13.
Zurück zum Zitat Bai, Z-Z: On the convergence of the multisplitting methods for the linear complementarity problem. SIAM J. Matrix Anal. Appl. 21, 67-78 (1999) MATHMathSciNetCrossRef Bai, Z-Z: On the convergence of the multisplitting methods for the linear complementarity problem. SIAM J. Matrix Anal. Appl. 21, 67-78 (1999) MATHMathSciNetCrossRef
14. Bai, Z-Z, Evans, DJ: Matrix multisplitting relaxation methods for linear complementarity problem. Int. J. Comput. Math. 63, 309-326 (1997)
15. Bai, Z-Z: Experimental study of the asynchronous multisplitting relaxation methods for linear complementarity problems. J. Comput. Math. 20, 561-574 (2002)
16. Bai, Z-Z, Huang, Y-G: A class of asynchronous iterations for the linear complementarity problem. J. Comput. Math. 21, 773-790 (2003)
17. Bai, Z-Z, Evans, DJ: Parallel chaotic multisplitting iterative methods for the large sparse linear complementarity problem. J. Comput. Math. 19, 281-292 (2001)
18. Zhang, L-L: Two-step modulus-based matrix splitting iteration for linear complementarity problems. Numer. Algorithms 57, 83-99 (2011)
19. Zheng, N, Yin, J-F: Accelerated modulus-based matrix splitting iteration methods for linear complementarity problem. Numer. Algorithms 64, 245-262 (2013)
20. Bai, Z-Z, Zhang, L-L: Modulus-based synchronous multisplitting iteration methods for large sparse linear complementarity problem. Numer. Linear Algebra Appl. 20, 425-439 (2013)
21. Cvetković, L, Kostić, V: A note on the convergence of the MSMAOR method for linear complementarity problems. Numer. Linear Algebra Appl. 21, 534-539 (2014)
22. El Foutayeni, Y, Khaladi, M: Using vector divisions in solving the linear complementarity problem. J. Comput. Appl. Math. 236, 1919-1925 (2012)
24. Jiang, H-Y, Qi, L: A new nonsmooth equations approach to nonlinear complementarity problems. SIAM J. Control Optim. 35, 178-193 (1997)
25. Chen, X, Nashed, Z, Qi, L: Smoothing methods and semismooth methods for nondifferentiable operator equations. SIAM J. Numer. Anal. 38, 1200-1216 (2000)
27. Sun, Z, Zeng, J-P: A monotone semismooth Newton type method for a class of complementarity problems. J. Comput. Appl. Math. 235, 1261-1274 (2011)
28. Huang, X-D, Zeng, Z-G, Ma, Y-N: The Theory and Methods for Nonlinear Numerical Analysis, pp. 57-59. Wuhan University Press, Wuhan (2004)
29. De Luca, T, Facchinei, F, Kanzow, C: A semismooth equation approach to the solution of nonlinear complementarity problems. Math. Program. 75, 407-439 (1996)
31. Yu, Z, Qin, Y: A cosh-based smoothing Newton method for P0 nonlinear complementarity problem. Nonlinear Anal., Real World Appl. 12, 875-884 (2011)
Metadata
Title: A generalized Newton method of high-order convergence for solving the large-scale linear complementarity problem
Authors: Yajun Xie, Changfeng Ma
Publication date: 01.12.2015
Publisher: Springer International Publishing
Published in: Journal of Inequalities and Applications, Issue 1/2015
Electronic ISSN: 1029-242X
DOI: https://doi.org/10.1186/s13660-015-0937-4