In this paper, we are interested in solving large-scale nonlinear unconstrained optimization problems of the form
$$ \min f(x), \quad x\in\Re^{n}, $$
(1)
where
\(f:\Re^{n}\rightarrow\Re\) is an at least twice continuously differentiable function. A nonlinear conjugate gradient method is an iterative scheme that generates a sequence \(\{x_{k}\}\) of approximations to the solution of (1) using the recurrence
$$ x_{k+1}=x_{k}+\alpha_{k} d_{k} , \quad k=0,1,2,3,\ldots, $$
(2)
where
\(\alpha_{k}>0\) is the steplength determined by a line search strategy that either minimizes the function or sufficiently reduces it along the search direction, and
\(d_{k}\) is the search direction defined by
$$d_{k}= \textstyle\begin{cases} -g_{k}; & k=0, \\ -g_{k} +\beta_{k} d_{k-1}; & k\geq1, \end{cases} $$
where
\(g_{k}\) is the gradient of
f at a point
\(x_{k}\) and
\(\beta_{k}\) is a scalar known as the conjugate gradient parameter. For example, Fletcher and Reeves (FR) [1], Polak-Ribière-Polyak (PRP) [2], Liu and Storey (LS) [3], Hestenes and Stiefel (HS) [4], Dai and Yuan (DY) [5], and Fletcher (CD) [6] used update parameters given, respectively, by
$$\begin{aligned} & \beta^{\mathrm{FR}}_{k}=\frac{g_{k}^{T}g_{k}}{g_{k-1}^{T}g_{k-1}}, \qquad \beta^{\mathrm{PRP}}_{k}=\frac{g_{k}^{T}y_{k-1}}{g_{k-1}^{T}g_{k-1}}, \qquad \beta^{\mathrm{LS}}_{k}=\frac{-g_{k}^{T}y_{k-1}}{d_{k-1}^{T}g_{k-1}}, \\ & \beta^{\mathrm{HS}}_{k}=\frac{g_{k}^{T}y_{k-1}}{d_{k-1}^{T}y_{k-1}}, \qquad \beta^{\mathrm{DY}}_{k}=\frac{g_{k}^{T}g_{k}}{d_{k-1}^{T}y_{k-1}}, \qquad \beta^{\mathrm{CD}}_{k}=-\frac{g_{k}^{T}g_{k}}{d_{k-1}^{T}g_{k-1}}, \end{aligned}$$
where
\(y_{k-1}=g_{k}-g_{k-1}\).
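To make the scheme concrete, the following is a minimal Python sketch of recurrence (2) together with a few of the classical choices of \(\beta_{k}\) listed above. It is only illustrative: the function and variable names are ours, a simple backtracking (Armijo) line search stands in for the Wolfe-type line searches these methods are normally paired with, and practical safeguards such as periodic restarts are omitted.

```python
import numpy as np

def beta_fr(g_new, g_old, d_old, y_old):
    # Fletcher-Reeves
    return (g_new @ g_new) / (g_old @ g_old)

def beta_prp(g_new, g_old, d_old, y_old):
    # Polak-Ribiere-Polyak, with y_old = g_new - g_old
    return (g_new @ y_old) / (g_old @ g_old)

def beta_hs(g_new, g_old, d_old, y_old):
    # Hestenes-Stiefel
    return (g_new @ y_old) / (d_old @ y_old)

def nonlinear_cg(f, grad, x0, beta_rule=beta_prp, tol=1e-6, max_iter=1000):
    """Scheme (2): x_{k+1} = x_k + alpha_k d_k, with d_0 = -g_0 and
    d_k = -g_k + beta_k d_{k-1}."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        if g @ d >= 0:          # safeguard: restart with steepest descent
            d = -g
        # backtracking Armijo line search along d (stand-in for a Wolfe search)
        alpha, fx, slope = 1.0, f(x), g @ d
        while alpha > 1e-16 and f(x + alpha * d) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        d = -g_new + beta_rule(g_new, g, d, y) * d
        x, g = x_new, g_new
    return x
```

For example, calling nonlinear_cg(lambda x: 0.5 * x @ x, lambda x: x, np.ones(10), beta_rule=beta_fr) reaches the minimizer at the origin in a single step.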
If the objective function is quadratic and the line search is exact, these methods perform identically. For a general nonlinear objective function, however, different choices of \(\beta_{k}\) lead to different practical performance. Following the practical convergence result of Al-Baali [7] and later that of Gilbert and Nocedal [8], researchers have focused on developing conjugate gradient methods that possess the sufficient descent condition
$$ g_{k}^{T} d_{k}\leq-c\Vert g_{k}\Vert ^{2}, $$
(3)
for some constant
\(c> 0\). For instance, the CG-DESCENT method of Hager and Zhang [
9]
$$ \beta^{\mathrm{HZ}}_{k}=\max \bigl\{ \beta^{N}_{k},\eta_{k} \bigr\} , $$
(4)
where
$$\beta^{N}_{k}=\frac{1}{d^{T}_{k-1}y_{k-1}} \biggl(y_{k-1}-2d_{k-1} \frac {\Vert y_{k-1}\Vert ^{2}}{d^{T}_{k-1}y_{k-1}} \biggr)^{T} g_{k} $$
and
$$\eta_{k}=\frac{-1}{\Vert d_{k-1}\Vert \min \{\Vert g_{k-1}\Vert ,\eta \}}, $$
which is based on a modification of the HS method.
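For completeness, (4) can be transcribed directly. In the sketch below (same illustrative conventions and argument order as the earlier snippet), the constant η is an input argument; the default value 0.01 is the one commonly associated with CG-DESCENT and is an assumption here, not a value taken from the text.

```python
import numpy as np

def beta_hz(g_new, g_old, d_old, y_old, eta=0.01):
    """Hager-Zhang parameter (4): beta_HZ = max(beta_N, eta_k)."""
    dty = d_old @ y_old
    beta_n = ((y_old - 2.0 * d_old * (y_old @ y_old) / dty) @ g_new) / dty
    eta_k = -1.0 / (np.linalg.norm(d_old) * min(np.linalg.norm(g_old), eta))
    return max(beta_n, eta_k)
```

With this argument order the rule can be passed as beta_rule to the earlier sketch.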
Another important class of conjugate gradient methods is the class of three-term conjugate gradient methods, in which the search direction is determined as a linear combination of \(g_{k}\),
\(s_{k}\), and
\(y_{k}\) as
$$ d_{k}=-g_{k} -\tau_{1} s_{k}+ \tau_{2} y_{k}, $$
(5)
where
\(\tau_{1}\) and
\(\tau_{2}\) are scalars. Among the three-term conjugate gradient methods proposed in the literature are the descent modified PRP method and the descent modified HS method of Zhang et al. [10, 11], whose search directions are given by
$$d_{k+1}=-g_{k+1}+ \biggl(\frac{g_{k+1}^{T}y_{k}}{g_{k}^{T}g_{k}} \biggr)d_{k}- \biggl(\frac{g_{k+1}^{T} d_{k}}{g_{k}^{T}g_{k}} \biggr)y_{k} , $$
and
$$d_{k+1}=-g_{k+1}+ \biggl(\frac{g_{k+1}^{T}y_{k}}{s_{k}^{T}y_{k}} \biggr)s_{k}- \biggl(\frac{g_{k+1}^{T}s_{k}}{s_{k}^{T}y_{k}} \biggr)y_{k}, $$
where
\(s_{k}=x_{k+1}-x_{k}\).
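The two directions above translate directly into code; the sketch below uses the same illustrative conventions as the earlier snippets.

```python
import numpy as np

def d_zhang_prp(g_new, g_old, d_old, y_old):
    # descent modified PRP three-term direction of Zhang et al. [10]
    gg = g_old @ g_old
    return -g_new + (g_new @ y_old) / gg * d_old - (g_new @ d_old) / gg * y_old

def d_zhang_hs(g_new, s_old, y_old):
    # descent modified HS three-term direction of Zhang et al. [11]
    sy = s_old @ y_old
    return -g_new + (g_new @ y_old) / sy * s_old - (g_new @ s_old) / sy * y_old

# In both cases g_new @ d equals -(g_new @ g_new): the contributions of the
# last two terms along g_new cancel, independently of the line search.
```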
An attractive property of these methods is that, at each iteration, the search direction satisfies the descent condition \(g_{k}^{T} d_{k}= -c\Vert g_{k}\Vert ^{2}\) for some constant
\(c> 0\). In the same spirit, Andrei [12] developed a three-term conjugate gradient method from the BFGS update of the inverse Hessian approximation restarted as the identity matrix at every iteration, where the search direction is given by
$$d_{k+1}=-g_{k+1}+ \biggl(\frac{y_{k}^{T} g_{k+1}}{y_{k}^{T}s_{k}}-2 \frac{\Vert y_{k}\Vert ^{2}}{y_{k}^{T}s_{k}} \frac{s_{k}^{T} g_{k+1}}{y_{k}^{T}s_{k}} \biggr)s_{k}- \biggl(\frac{s_{k}^{T} g_{k+1}}{y_{k}^{T}s_{k}} \biggr)y_{k}. $$
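As a quick illustration, the direction in the display above (as written here) can be evaluated and its descent behaviour inspected numerically; the snippet below uses random data and the same illustrative conventions as the earlier sketches.

```python
import numpy as np

def d_andrei(g_new, s_old, y_old):
    # three-term direction in the display above
    sy = s_old @ y_old
    coef_s = (y_old @ g_new) / sy - 2.0 * (y_old @ y_old) / sy * (s_old @ g_new) / sy
    coef_y = (s_old @ g_new) / sy
    return -g_new + coef_s * s_old - coef_y * y_old

rng = np.random.default_rng(0)
g, s, y = rng.standard_normal((3, 5))
d = d_andrei(g, s, y)
print(g @ d, -(g @ g))   # for this formula g^T d <= -||g||^2
```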
An interesting feature of this method is that both the sufficient descent condition and the conjugacy condition are satisfied, and global convergence holds for uniformly convex functions. Motivated by the good performance of three-term conjugate gradient methods, we are interested in developing a three-term conjugate gradient method that satisfies the sufficient descent condition and the conjugacy condition and is globally convergent. The remainder of this paper is structured as follows: Section
2 deals with the derivation of the proposed method. In Section
3, we present the global convergence properties. The numerical results and discussion are reported in Section
4. Finally, a concluding remark is given in the last section.