In this paper, we reformulate a nonlinear complementarity problem or a mixed complementarity problem as a system of piecewise almost linear equations. Such problems arise, for example, from obstacle problems with a nonlinear source term or from some contact problems. Based on the reformulated systems of piecewise almost linear equations, we propose a class of semi-iterative algorithms to find the exact solution of the problems. We prove that the semi-iterative algorithms enjoy a nice monotone convergence property, in the sense that the sets of indices for which the corresponding components of the iterates violate the constraints shrink monotonically. The algorithms therefore converge monotonically to the exact solutions of the problems in a finite number of steps. Some numerical experiments are presented to show the effectiveness of the proposed algorithms.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors jointly worked on the results and they read and approved the final manuscript.
1 Introduction
Let \(F:R^{n}\to R^{n}\) be a given function. The nonlinear complementarity problem, denoted by \(\operatorname{NCP}(F,\phi)\), is to find an \(x\in R^{n}\) such that
In this paper, we focus on problem (1.1), in which the function F has the form of \(F(x)=Ax+\Psi(x)\), that is, we have the problem of finding an \(x\in R^{n}\) such that
where \(A=(a_{ij})\in R^{n\times n}\) is a given matrix, \(\phi=(\phi _{i})\in R^{n}\) is a given vector, and \(\Psi:R^{n}\to R^{n}\) is a given diagonal differentiable mapping, that is, the ith component \(\Psi_{i}\) of \(\Psi(x)\) is a function of the ith variable \(x_{i}\) only:
We denote the above problem by \(\operatorname{ALCP}(A,\Psi,\phi)\) and call it an almost linear complementarity problem (see, e.g., [1]). Obviously, if Ψ is a linear function, \(\operatorname{ALCP}(A,\Psi,\phi)\) reduces to a linear complementarity problem.
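To make the complementarity conditions concrete, the following NumPy sketch evaluates the residual of \(\operatorname{ALCP}(A,\Psi,\phi)\), assuming the standard componentwise form \(\min(x-\phi, Ax+\Psi(x))=0\) of the problem; the data A, Ψ, and φ below are hypothetical, chosen only for illustration:

```python
import numpy as np

def alcp_residual(A, psi, phi, x):
    """Componentwise residual min(x - phi, A x + Psi(x)) of the ALCP;
    it is the zero vector if and only if x solves the problem."""
    return np.minimum(x - phi, A @ x + psi(x))

# Illustrative data (not from the paper): A = I, Psi_i(x_i) = x_i - 2, phi = 0.
A = np.eye(2)
psi = lambda x: x - 2.0           # diagonal mapping: each Psi_i depends on x_i only
phi = np.zeros(2)
r = alcp_residual(A, psi, phi, np.array([1.0, 1.0]))   # x = (1, 1) solves it
```

A zero residual certifies a solution, which is a convenient sanity check for any of the algorithms below.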
\(\operatorname{ALCP}(A,\Psi,\phi)\) has many applications, especially in engineering. For instance, it can be derived from discretizations of the Bratu obstacle problem [2], which models nonlinear diffusion phenomena taking place in combustion and in semiconductors, and of some free boundary problems with nonlinear source terms, which model diffusion problems involving Michaelis-Menten or second-order irreversible reactions [3, 4].
In the literature, one classical approach to solving (1.1) consists of the linearized projected relaxation methods [5, 6]. They are known to be convergent and easy to implement. However, their convergence rates depend crucially on the choice of the relaxation parameter and deteriorate heavily under mesh refinement. To improve efficiency, multigrid methods (see, e.g., [7‐9]) and domain decomposition methods (see, e.g., [10‐16]) were proposed later; in these methods, the subproblems are generally linear complementarity problems, so preconditioners widely used for linear systems may not be applied directly. Another efficient way to solve problem (1.1) is given by active set strategies (see, e.g., [17‐19]). To solve the linear complementarity problem, the basic iteration of an active set strategy consists of two steps. First, based on a certain active set method, the (mesh) domain is decomposed into active and inactive parts. Then a reduced linear system associated with the inactive set can be solved by a fast linear solver such as the multigrid method or the preconditioned conjugate gradient method. For instance, in [17, 18], two approaches were introduced, respectively, for the elliptic and the parabolic case, where the Lagrange multiplier strategy is used in order to express the problem as a standard equality problem in a higher dimension. In particular, in [18] such a strategy is combined with a semi-iterative procedure based on a suitable successive update of the coincidence set (that is, the area where the solution touches the obstacle), while in [17] the solution of the parabolic variational inequality is obtained as the limit of the solutions of a family of appropriately regularized nonlinear parabolic equations. Alternatively, inexact semismooth Newton methods have been developed to solve problem (1.1) based on its semismooth reformulation [20‐23].
These algorithms are attractive because they converge rapidly from any sufficiently good initial iterate and the subproblems are also systems of equations.
In this paper, we propose some semi-iterative algorithms for the numerical treatment of an almost linear unilateral obstacle problem or mixed complementarity problem. The algorithms are obtained by reformulating the problem as a piecewise almost linear system, which can be considered an extension of the piecewise linear systems used for the affine lower obstacle problem [24‐26]. The algorithms do not require the use of Lagrange multipliers, and the subproblems involved are systems of almost linear equations whose dimensions are less than n, the dimension of the original problem. We prove that the proposed algorithms enjoy a nice global monotone convergence property, in the sense that the sets of indices for which the corresponding components of the iterates violate the constraints shrink monotonically. The iterates then converge to the exact solution in a finite number of steps. The numerical examples presented indicate the effectiveness of the proposed algorithms.
The rest of this paper is organized as follows. In Section 2 we investigate the classical obstacle problem and reformulate the problem via a piecewise almost linear system (PALS). Based on the PALS given in Section 2, we propose a semi-iterative algorithm and discuss its monotone and finite convergence in Section 3. In Section 4, semi-iterative algorithms are proposed for solving upper obstacle problems and mixed complementarity problems, respectively, via their PALS reformulations, and the monotone and finite convergence of the algorithms are also obtained. In Section 5 we present some numerical examples to investigate the efficiency of the algorithms. Finally, in Section 6, we give a few conclusions.
2 Some preliminaries
For any given vector \(x\in R^{n}\), let the diagonal function \(P_{\phi}(x)\) be defined as
3 A semi-iterative algorithm for obstacle problem (1.2)
It is to be noted that the left-hand side of system (2.2) is not everywhere differentiable even for smooth Ψ. Nevertheless, a semi-iterative algorithm for solving system (2.2) can be constructed as follows:
Algorithm 3.1
Let \(P^{0}=O\). Set \(k:=0\).
Step 1: Solve the system of finding \(x^{k+1}\), such that
Step 2: If \((P^{k+1}-P^{k})(x^{k+1}-\phi)=0\), let \(y=P^{k}(x^{k+1}-\phi)+\phi=\max\{x^{k+1},\phi\}\) and stop. Otherwise, go to Step 3.
Step 3: Set \(k:=k+1\) and go to Step 1.
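As an illustration of the steps above, here is a minimal NumPy sketch of Algorithm 3.1 in the linear case \(\Psi\equiv0\), \(F(x)=Ax-b\). Following the piecewise linear systems of [24], we assume that system (3.1) then takes the form \((I-P^{k}+AP^{k})(x^{k+1}-\phi)=b-A\phi\); this is a sketch under that assumption, not the authors' implementation:

```python
import numpy as np

def semi_iterative_lcp(A, b, phi, max_iter=None):
    """Semi-iterative sketch for min(y - phi, A y - b) = 0 with A an M-matrix.

    Each step solves (I - P^k + A P^k)(x - phi) = b - A phi, then updates
    P^{k+1} = P_phi(x^{k+1}); the stopping test is that of Step 2.
    """
    n = len(b)
    I = np.eye(n)
    p = np.zeros(n)                        # diagonal of P^0 = O
    for _ in range(max_iter or n + 1):     # finite convergence: at most n updates
        M = I - np.diag(p) + A @ np.diag(p)
        x = phi + np.linalg.solve(M, b - A @ phi)
        p_new = (x >= phi).astype(float)   # P^{k+1} = P_phi(x^{k+1})
        if np.allclose((p_new - p) * (x - phi), 0):
            return np.maximum(x, phi)      # y = P^k(x^{k+1} - phi) + phi
        p = p_new
    raise RuntimeError("did not converge")
```

For an M-matrix A, Theorem 3.1 below guarantees that the loop terminates in at most n active-set updates.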
Remark 3.1
In Algorithm 3.1, one needs to solve a system of nonlinear equations (3.1) at each iteration. Since the nonlinearity of \(F(x)=Ax+\Psi(x)\) occurs only in the diagonal function Ψ, (3.1) is a system of almost linear equations (see, e.g., [1]). In particular, when \(F(x)=Ax-b\), Algorithm 3.1 reduces to the semi-iterative Newton-type algorithm for the solution of the linear complementarity problem (2.4); in this case, only a system of linear equations needs to be solved at each iteration. We refer to [24] for more details.
where \(A_{IJ}\) denotes the submatrix of A consisting of \(a_{ij}\) with \(i\in I\), \(j\in J\), and \(x_{I}\) denotes the subvector of x consisting of \(x_{i}\) with \(i\in I\). Consequently, the main work in solving (3.1) is to solve a system of almost linear equations of dimension \(n_{k}=\operatorname{dim}(I^{k})\leq n\).
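In NumPy, for instance, the submatrix \(A_{IJ}\) and subvector \(x_{I}\) in this notation can be extracted as follows (the index sets here are arbitrary, chosen only for illustration):

```python
import numpy as np

A = np.arange(16.0).reshape(4, 4)
I_idx = [0, 2]                    # index set I
J_idx = [1, 3]                    # index set J
A_IJ = A[np.ix_(I_idx, J_idx)]    # submatrix (a_ij), i in I, j in J
x = np.array([1.0, 2.0, 3.0, 4.0])
x_I = x[I_idx]                    # subvector of x with i in I
```

The reduced almost linear system at iteration k then involves only \(A_{I^{k}I^{k}}\) and \(x_{I^{k}}\), of dimension \(n_{k}\le n\).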
In order to prove the convergence of Algorithm 3.1, some preliminary results are needed first (see, e.g., [24, 27]).
Lemma 3.1
Let A be an M-matrix. Then, for any diagonal matrix P whose diagonal entries are zeros or ones, the two matrices \(I- P + AP\) and \(I-P + PA\) are M-matrices, and therefore \((I- P + AP)^{-1}\geq0\) and \((I- P + PA)^{-1}\geq0\). Furthermore, for any nonnegative diagonal matrix Γ, the matrices \(I- P + AP+\Gamma\) and \(I-P + PA+\Gamma\) are M-matrices, and then \((I- P + AP+\Gamma)^{-1}\geq0\) and \((I- P + PA+\Gamma)^{-1}\geq0\).
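The inverse-positivity claims of Lemma 3.1 are easy to check numerically. The sketch below verifies them for a small tridiagonal M-matrix and all eight 0-1 diagonal matrices P (the example matrix is ours, not from the paper):

```python
import numpy as np

# A small M-matrix (tridiagonal, strictly diagonally dominant)
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
I = np.eye(3)

# Check (I - P + A P)^{-1} >= 0 and (I - P + P A)^{-1} >= 0 for every 0/1 diagonal P
for bits in range(8):
    P = np.diag([float((bits >> i) & 1) for i in range(3)])
    assert np.all(np.linalg.inv(I - P + A @ P) >= -1e-12)
    assert np.all(np.linalg.inv(I - P + P @ A) >= -1e-12)
```

The tolerance `-1e-12` only absorbs floating-point rounding; mathematically the inverses are entrywise nonnegative.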
Define
$$\mathcal{P}=\bigl\{ P\in R^{n\times n}: P \text{ is a diagonal matrix with diagonal entries zero or one}\bigr\} $$
Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots, n\)) be monotonically nondecreasing functions. Then, for any \(P\in\mathcal{P}\), \(T_{P}\) is an M-function. In fact, for any \(x\in R^{n}\) and \(y\in R^{n}\), if \(T_{P}(x)\geq T_{P}(y)\), we have
is a diagonal matrix with nonnegative diagonals. By Lemma 3.1, \((I-P+AP+\Gamma_{P}(x,y))^{-1}\geq0\), and hence by (3.3), we have \(x\geq y\), which implies the inverse isotonicity of \(T_{P}\). On the other hand, for every pair of indices \(i\neq j\) and for every \(x\in R^{n}\), noting that \(A^{P}:=I-P+AP=(a^{P}_{ij})\) is an M-matrix, we have \(a^{P}_{ij}\leq0\). Therefore, the one-dimensional function \(g_{ij}\) in the variable t has the form
where \(x^{t,j}=(x_{1},\ldots, x_{j-1},t,x_{j+1},\ldots, x_{n})\) and \(P=\operatorname {diag}(p_{1},p_{2},\ldots,p_{n})\). It is clear that \(g_{ij}\) is nonincreasing, which implies the off-diagonal antitonicity of \(T_{P}\). The inverse isotonicity together with the off-diagonal antitonicity of \(T_{P}\) implies that \(T_{P}\) is an M-function [5].
For any \(P\in\mathcal{P}\) and \(x\in \bar{\mathcal{S}} _{P}\), we have
That is, \(\bar{\mathcal{ S}} _{P}\) is bounded below. On the other hand, let \(x= \phi+ (I-P+AP)^{-1}\max\{ -\Psi(\phi)-A\phi,0\}\). It is easy to check \(x\in \bar{\mathcal{S}} _{P}\). In fact,
by the use of the monotonicity of Ψ and the relation \(P(I-P+AP)^{-1}\max\{ -\Psi(\phi)-A\phi,0\}+\phi\geq\phi\). Therefore, \(\bar{\mathcal{S}} _{P}\) is nonempty and bounded below, and hence has a minimal element. In fact, \(\bar{\mathcal{S}}_{P}\) has a unique minimal element \(x^{*}_{P}\), which solves \(T_{P}(x)=0\), i.e., \(T_{P}(x^{*}_{P})=0\) (see, e.g., [12] and the references therein). According to the above discussion, we immediately have the following result.
Lemma 3.2
Let A be an M-matrix, and \(\Psi_{i}\) (\(i=1,2,\ldots, n\)) be monotonically nondecreasing functions. Then Algorithm 3.1 is well defined.
The following two lemmas are crucial for the convergence of Algorithm 3.1.
which implies that \(x^{k+1}\) is the solution of problem (2.2). □
Lemma 3.4
Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots, n\)) be monotonically nondecreasing functions. Let \(\{x^{k}\}\) and \(\{P^{k}\}\) be generated by Algorithm 3.1. Then
is a diagonal matrix with nonnegative diagonals. By Lemma 3.1, \((I-P^{k}+P^{k}A+\Gamma^{k})^{-1}\geq0\), and hence by (3.10), we get \(P^{k}(x^{k+1}-\phi)-P^{k-1}(x^{k}-\phi)\geq0\), that is, (3.6) holds.
If \(p_{i}^{k}=1\), then \(x_{i}^{k}\ge\phi_{i}\), and by (3.6), we have
This implies \(x_{i}^{k+1}\geq\phi_{i}\) and hence \(p_{i}^{k+1}=1=p_{i}^{k}\). If \(p_{i}^{k}=0\), we immediately have \(p_{i}^{k+1}\geq0=p_{i}^{k}\). Therefore, we conclude (3.7). □
Theorem 3.1
Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots, n\)) be monotonically nondecreasing functions. Then Algorithm 3.1 is well defined and stops at the solution of problem (1.2) in a finite number of iterations.
Proof
Lemma 3.2 implies that the algorithm is well defined. By Lemma 3.4 and Algorithm 3.1, we have \(I\geq P^{k+1}\geq P^{k}\geq\cdots\geq P^{0}=O\). Since the \(P^{k}\) form a nondecreasing sequence of 0-1 diagonal matrices, we must have \(P^{k+1}=P^{k}\) for some \(k< n\). In this case, \((P^{k+1}-P^{k})(x^{k+1}-\phi)=0\). By Lemma 3.3, Algorithm 3.1 obtains the solution of the problem within at most n steps. □
Theorem 3.1 indicates that Algorithm 3.1 obtains the solution of problem (1.2) within at most n steps. Nevertheless, the corresponding upper bound (i.e., n) may be large when the dimension of the system is large. However, the numerical tests presented in Section 5 show that convergence is typically reached in just a few iterations.
Then, if \(T(x):=Ax+\Psi(x)\) is an M-function, the unique solution of problem (1.2) is the maximal element of \(\underline{\mathcal{S}}\) (see, e.g., [12]). The following conclusion indicates that Algorithm 3.1 has a monotone convergence property.
Theorem 3.2
Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots,n\)) be monotonically nondecreasing functions. Let \(\{x^{k}\}\) and \(\{P^{k}\}\) be generated by Algorithm 3.1. Assume
which implies \(\min\{y^{k+1}-\phi,Ay^{k+1}+\Psi(y^{k+1})\}\leq0 \) and \(y^{k+1}\in \underline{\mathcal{S}}\). □
In Theorem 3.1, we assume that A is an M-matrix. This assumption can be replaced by the following conditions:
$$ \operatorname{null}\bigl(A^{T}\bigr)\equiv\operatorname{span}(v),\qquad \operatorname{null}(A)\equiv\operatorname{span}(w)\quad \text{and} \quad A+\Gamma \text{ is an } M\text{-matrix} $$
(3.11)
for some \(v>0 \) and \(w>0\) (componentwise), and for any diagonal matrix Γ satisfying \(\Gamma\geq0\) and \(\Gamma\neq O\). This case may occur for elliptic problems with Neumann boundary conditions. Assume further that \(v^{T}\Psi(\phi)\geq0\). Similar to Lemma 3.1, for any nonnegative diagonal matrix Γ and any diagonal matrix \(P\neq I\) whose diagonal entries are zeros or ones, the matrices \(I- P + AP+\Gamma\) and \(I-P + PA+\Gamma\) are M-matrices, which implies \((I- P + AP+\Gamma)^{-1}\geq0\) and \((I- P + PA+\Gamma)^{-1}\geq0\) (see, e.g., [24]). According to the proof of Lemma 3.4, for \(P^{k}\neq I\), problem (3.1) is well defined and (3.6) and (3.7) hold. If \(P^{k}=I\neq P^{k-1}\), we have \(x^{k}\geq\phi\) and then \(P^{k-1}(x^{k}-\phi)+\phi\geq\phi\). By (3.1), we have
which implies \((P^{k}-P^{k-1})(x^{k}-\phi)=0\), where the second inequality follows from \(v>0\), \(\operatorname{null}(A^{T})\equiv\operatorname{span}(v)\), and the monotonicity of Ψ. Therefore, Algorithm 3.1 stops at the kth iteration if \(P^{k}=I\). Consequently, we have the following convergence result.
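A concrete instance of the type of matrix in (3.11), assumed by us for illustration, is the one-dimensional Neumann Laplacian: it is singular with \(\operatorname{null}(A)=\operatorname{null}(A^{T})=\operatorname{span}(e)\), where \(e=(1,\ldots,1)^{T}>0\). The sketch checks that \(I-P+AP\) is nevertheless inverse-positive for a 0-1 diagonal \(P\neq I\):

```python
import numpy as np

# 1-D Neumann Laplacian: singular, with null(A) = null(A^T) = span(e), e = (1,...,1)^T > 0
n = 4
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0
assert np.allclose(A @ np.ones(n), 0)          # e lies in the null space

# For a 0/1 diagonal P != I, I - P + A P is nonsingular with nonnegative inverse
I = np.eye(n)
P = np.diag([1.0, 1.0, 1.0, 0.0])              # P != I
assert np.all(np.linalg.inv(I - P + A @ P) >= -1e-12)
```

So the active-set systems remain solvable even though A itself is singular, consistent with the discussion above.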
Theorem 3.3
Let A satisfy (3.11) and \(\Psi_{i}\) (\(i=1,2,\ldots,n\)) be monotonically nondecreasing functions with \(v^{T}\Psi(\phi)\geq0\). Then Algorithm 3.1 is well defined. Let \(\{x^{k}\}\) and \(\{P^{k}\}\) be generated by Algorithm 3.1, and let
where \(A=(a_{ij})\) is a given matrix, \(\phi=(\phi_{i})\in R^{n}\) is a given vector, and \(\Psi:R^{n}\to R^{n}\) is a given diagonal mapping defined as before.
For any given vector \(x\in R^{n}\), let the diagonal function \(P_{\phi}(x)\) be defined as
where \(P_{\phi}(x)\) is defined by (4.2). Then \(y=P_{\phi}(x)(x-\phi)+\phi=\min\{x,\phi\}\) is the solution of the almost linear upper obstacle problem (4.1).
According to Lemma 4.1, a semi-iterative algorithm for solving upper obstacle problem (4.1) can be constructed as follows.
Algorithm 4.1
Let \(P^{0}=O\). Set \(k:=0\).
Step 1: Solve the system of finding \(x^{k+1}\), such that
where \(P^{k}=P_{\phi}(x^{k}) \) is defined by (4.2).
Step 2: If \((P^{k+1}-P^{k})(x^{k+1}-\phi)=0\), let \(y=P^{k}(x^{k+1}-\phi)+\phi=\min\{x^{k+1},\phi\}\) and stop. Otherwise, go to Step 3.
Step 3: Set \(k:=k+1\) and go to Step 1.
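Algorithm 4.1 mirrors Algorithm 3.1 with the max replaced by a min. As before, here is a minimal NumPy sketch for the linear case \(\Psi\equiv0\), under the assumption (ours, following [24]) that the system in Step 1 reduces to \((I-P^{k}+AP^{k})(x^{k+1}-\phi)=b-A\phi\), now with \(P^{k}\) indicating the components at or below the obstacle:

```python
import numpy as np

def semi_iterative_upper(A, b, phi, max_iter=None):
    """Semi-iterative sketch for the upper obstacle problem
    max(y - phi, A y - b) = 0, with A an M-matrix (linear case Psi = 0)."""
    n = len(b)
    I = np.eye(n)
    p = np.zeros(n)                        # diagonal of P^0 = O
    for _ in range(max_iter or n + 1):
        M = I - np.diag(p) + A @ np.diag(p)
        x = phi + np.linalg.solve(M, b - A @ phi)
        p_new = (x <= phi).astype(float)   # P^{k+1} = P_phi(x^{k+1}) as in (4.2)
        if np.allclose((p_new - p) * (x - phi), 0):
            return np.minimum(x, phi)      # y = P^k(x^{k+1} - phi) + phi
        p = p_new
    raise RuntimeError("did not converge")
```

Only the active-set indicator and the final projection change relative to the lower obstacle sketch.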
Similar to the proofs of Lemmas 3.1-3.3 as well as Theorems 3.1 and 3.2, we have the following finite convergence result.
Theorem 4.1
Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots,n\)) be monotonically nondecreasing functions. Then Algorithm 4.1 is well defined. Moreover, let \(\{x^{k}\}\) and \(\{P^{k}\}\) be generated by Algorithm 4.1. Then
Therefore, Algorithm 4.1 stops at the solution of problem (4.1) in at most n iterations.
In the following, we consider the numerical solution of the almost linear mixed complementarity problem of finding an \(x\in R^{n}\) such that (see, e.g., [28])
where \(P_{\varphi}(x)\) is defined by (4.5) and φ satisfies (4.4). Then \(y=P_{\varphi}(x)(x-\varphi)+\varphi\) is the solution of the almost linear mixed complementarity problem (4.3).
According to Lemma 4.2, we construct the following algorithm for the solution of mixed complementarity problem (4.3).
where \(P^{k}=P_{\varphi}(x^{k}) \) is defined by (4.5) and φ satisfies (4.4).
Step 2: If \((P^{k+1}-P^{k})(x^{k+1}-\varphi)=0\), let \(y=P^{k}(x^{k+1}-\varphi)+\varphi\) and stop. Otherwise, go to Step 3.
Step 3: Set \(k:=k+1\) and go to Step 1.
Similarly, we have the following conclusion.
Theorem 4.2
Let A be an M-matrix and \(\Psi_{i}\) (\(i=1,2,\ldots, n\)) be monotonically nondecreasing functions. Then Algorithm 4.2 is well defined. Moreover, let \(\{x^{k}\}\) and \(\{P^{k}\}\) be generated by Algorithm 4.2. Then
Therefore, Algorithm 4.2 stops at the solution of problem (4.3) in at most n iterations.
For a matrix A satisfying (3.11), the monotone and finite-step convergence can be deduced in a similar way. We omit the details.
5 Numerical experiments
In this section, we present some numerical experiments in order to investigate the efficiency of the proposed algorithms. The programs are coded in Visual C++ 6.0 and run on a computer with 2.0 GHz CPU. We consider the following four problems.
where \(f(x,y,u)=u/(1+u)\), \(\varepsilon(x,y)\) is a local threshold consumption rate, and D is a domain in Ω. At the free boundary (\(x = s(y)\)), the concentration and its gradient vanish. We take into account only the steady-state case, i.e., \(c=0\) with the following additional data on Ω:
$$\textstyle\begin{cases} \varepsilon(x,y)=8(y-0.5)^{2},\\ \frac{\partial u}{\partial n}=0 \quad\mbox{when }y=0 \mbox{ and } y=1 \mbox{ and } s(y)=1,\\ u(0,y)=y(1-y), \end{cases} $$
with \(x=s(y)\) being the free boundary. We shall be interested in a nonnegative solution of the above problem. We discretize the problem by using a five-point difference scheme with a constant mesh step size: \(h=1/(m+1)\), where m denotes the number of mesh nodes in x- or y-direction. Then Problem 1 can be transformed to problem (1.1).
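The five-point discretization can be sketched as follows. This minimal assembler, written by us for illustration, uses a homogeneous Dirichlet closure on all sides and lexicographic ordering, whereas the actual problem mixes Dirichlet and Neumann conditions as specified above:

```python
import numpy as np

def five_point_laplacian(m, h):
    """Standard five-point stencil on an m-by-m interior grid (Dirichlet
    boundary), scaled by 1/h^2, with row-major (lexicographic) ordering."""
    n = m * m
    A = np.zeros((n, n))
    for i in range(m):
        for j in range(m):
            k = i * m + j
            A[k, k] = 4.0 / h**2
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < m and 0 <= jj < m:
                    A[k, ii * m + jj] = -1.0 / h**2
    return A
```

The resulting matrix is a symmetric M-matrix, so the convergence theory of Section 3 applies to the discrete problem.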
and \(h=\frac{ 1}{ \sqrt{n}+1}\), \(D(x)=(D_{i}):R^{n}\rightarrow R^{n}\) is a given diagonal mapping with \(D_{i}:R\rightarrow R\), for \(i=1,2,\ldots,n\), that is, the component \(D_{i}\) of D is a function of the ith variable \(x_{i}\) only. Set \(D_{i}(x_{i})=\lambda e^{x_{i}}\) and obtain a diagonal mapping \(D(x)=(D_{i}(x_{i}))\). In our test, we fix \(\lambda=0.8\), and let \(f_{i}=\max\{ 0,v_{i}-0.5\}\times10^{w_{i}-0.5}\), where \(w_{i}\) and \(v_{i}\) are random numbers in \([0,1]\), \(i=1,2,\ldots, n\).
We discuss the following nonlinear complementarity problem:
$$x\ge0,\qquad F(x)\ge0,\qquad x^{T}F(x)=0, $$
where
$$F(x)=Mx+D(x)+f. $$
The matrix \(M=A^{T}A+B\) is produced as follows. Let A be an \(n\times n\) matrix whose entries are randomly generated in the interval \((-5,5)\), and let the skew-symmetric matrix B be generated in the same way. Let the vector f be generated from a uniform distribution in the interval \((-200,300)\). Let \(D(x)=(D_{i}(x_{i})):R^{n}\rightarrow R^{n}\) be a given diagonal mapping with \(D_{i}(x_{i})=d_{i}\arctan(x_{i})\), \(i=1,2,\ldots, n\), where \(d_{i}\) (\(i=1,2,\ldots, n\)) are chosen randomly in \((0,1)\).
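The construction of this random test problem can be sketched as below. The seed is arbitrary, and forming B from a strictly upper triangular matrix is one standard way, assumed here, to obtain a skew-symmetric matrix with entries in the stated range:

```python
import numpy as np

rng = np.random.default_rng(0)        # arbitrary seed, for reproducibility
n = 8
A = rng.uniform(-5, 5, (n, n))        # entries random in (-5, 5)
U = np.triu(rng.uniform(-5, 5, (n, n)), k=1)
B = U - U.T                           # skew-symmetric, entries still in (-5, 5)
M = A.T @ A + B                       # M = A^T A + B
f = rng.uniform(-200, 300, n)
d = rng.uniform(0, 1, n)
D = lambda x: d * np.arctan(x)        # diagonal mapping D_i(x_i) = d_i * arctan(x_i)
F = lambda x: M @ x + D(x) + f        # F(x) = M x + D(x) + f
```

By construction, the symmetric part of M is \(A^{T}A\succeq0\), but M need not be an M-matrix, which is the point of this test problem.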
We discretize the problem by using a finite difference scheme with a constant mesh step size: \(h=1/m\), where m denotes the number of mesh nodes in x- or y-direction. Then Problem 4 can be transformed to a linear mixed complementarity problem.
We compare the algorithms from the point of view of iteration numbers. Here, we consider three algorithms: the semi-iterative algorithms proposed in this paper (denoted by SIA), the semismooth equation approach proposed in [31] (denoted by SSEA), and the two-level additive Schwarz method proposed in [32] (denoted by TLASM).
For SIA, we choose the initial matrix \(P^{0}=O\) for all problems and adopt Newton iteration with line search to solve the system of nonlinear equations at each iteration. The termination criterion of inner iteration is \(\|\alpha^{k} d^{k}\|_{2}\le10^{-6}\), where \(d^{k}\) is the Newton direction and \(\alpha^{k}\) is the step length. The number of inner iterations is denoted by \(\mathrm{iter}_{\mathit{inn}}\) and the iteration number of the algorithm is denoted by iter.
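A Newton iteration with line search of the kind used for the inner solves can be sketched generically as follows. The Armijo-type backtracking on the residual norm and its constants are our choices for illustration; only the stopping rule \(\|\alpha^{k} d^{k}\|_{2}\le10^{-6}\) is taken from the text:

```python
import numpy as np

def newton_linesearch(G, J, x0, tol=1e-6, max_iter=50):
    """Damped Newton for G(x) = 0 with backtracking on ||G||; stops when
    ||alpha * d||_2 <= tol, as in the inner iteration described above."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = G(x)
        d = np.linalg.solve(J(x), -g)          # Newton direction
        alpha = 1.0                            # step length, halved until accepted
        while np.linalg.norm(G(x + alpha * d)) > (1 - 1e-4 * alpha) * np.linalg.norm(g):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x = x + alpha * d
        if np.linalg.norm(alpha * d) <= tol:
            return x
    return x
```

For the almost linear subproblems, G is the left-hand side of (3.1) and J its (diagonal-plus-matrix) Jacobian, so each inner step solves one linear system.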
For SSEA proposed in [31], we choose the initial point \(x^{0}=0\), the parameters \(\epsilon=10^{-6}\), \(p=3\), \(\rho=0.5\), \(\beta=0.3\). \(H_{k}\in\partial_{B} \Phi\) is defined by the procedure proposed in Section 7 of [31].
For TLASM, we use PSOR to solve all subproblems relating to obstacle problems, with tolerance \(10^{-4}\) in the \(\|\cdot\|_{2}\) norm. The systems of nonlinear equations in TLASM are solved by Newton iteration, and the termination criterion is the same as that in SIA. The relaxation parameter in the relaxation iterative method is chosen as \(\omega=1.8\). In order to get a super-solution initial point, we let \(x^{0}=A^{-1}e\), with \(e=(1,1,\ldots,1)^{T}\), and \(x^{0}=-A^{-1}f\), respectively, for Problems 1 and 2. In TLASM, \(\mathrm{iter}_{\mathit{inn}}\) and iter are the same as those in SIA.
It is easy to see from Tables 1-3 that the algorithms presented in this paper are competitive with the other algorithms in most cases. For TLASM, we need to find a super-solution initial point of the problem, which may bring about some numerical difficulties; compared with TLASM, it is much easier for SIA to choose a suitable initial point. Another advantage of SIA is that the subproblems in the algorithm are systems of almost linear equations of dimensions less than n, while the subproblems in the two-level additive Schwarz method still include complementarity problems. In Table 3, we do not list the results for the two-level additive Schwarz method, because for Problem 3 it is difficult to find a super-solution initial point \(x^{0}\) and the algorithm may not converge. Theoretically, we cannot guarantee the convergence of SIA for Problem 3, since A is not an M-matrix. However, it is interesting to see from Table 3 that SIA is still stable and effective: the iteration numbers are small and show almost no change as the dimension increases.
As for SSEA and TLASM, some modifications would be needed in order to solve mixed complementarity problems. Therefore, for Problem 4 we run only Algorithm 4.2; the iteration numbers for the mixed complementarity problems of different dimensions obtained by discretizing Problem 4 are listed in Table 4. It is easy to see that Algorithm 4.2 is still effective for this problem.
6 Conclusions
In this paper, we present some finite algorithms for the numerical solution of almost linear (mixed) complementarity problems. The algorithms are based on systems of piecewise almost linear equations, which are reformulations of the problems. It is proved that the algorithms converge monotonically to the solution of the problem in a finite number of steps. Indeed, the algorithms produce monotone sequences of lower or upper solutions converging to the solutions of the problems. Numerical results show that the proposed methods are effective.
Acknowledgements
The work was supported by the NSF of China grant 11271069, by Training program for outstanding young teachers in Guangdong Province (Grant No. 20140202), and by Educational Commission of Guangdong Province, China (Grant No. 2014KQNCX210).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.