Neurocomputing, Volume 74, Issue 10, May 2011, Pages 1710–1719

Performance analysis of gradient neural network exploited for online time-varying quadratic minimization and equality-constrained quadratic programming

https://doi.org/10.1016/j.neucom.2011.02.007

Abstract

In this paper, the performance of a gradient neural network (GNN), which was designed intrinsically for solving static problems, is investigated, analyzed and simulated in the situation of time-varying coefficients. It is theoretically proved that the GNN, when exploited for online solution of time-varying quadratic minimization (QM) and quadratic programming (QP) problems, can only approximately approach the time-varying theoretical solution rather than converge to it exactly; that is, the steady-state error between the GNN solution and the theoretical solution cannot decrease to zero. To characterize this behavior, an upper bound on such an error is estimated first, and the global exponential convergence rate with which the GNN approaches that error bound is then investigated. Computer-simulation results, including those based on a six-link robot manipulator, further substantiate the performance analysis of the GNN exploited to solve online time-varying QM and QP problems.

Introduction

As an important branch of mathematical optimization, quadratic programming (QP) constrained with linear equalities [with quadratic minimization (QM) as a special case] has been theoretically analyzed [1], [2], [3], [4] and widely applied in various scientific areas, e.g., optimal controller design [5], [6], power scheduling [7], robot-arm motion planning [8], [9], [10], [11], and digital signal processing [12]. Due to its fundamental role, many algorithms have been proposed to solve QP problems [3], [13], [14]. In general, numerical algorithms performed on digital computers are considered a well-accepted approach to QM and linear-equality constrained QP problems. However, for large-scale online or real-time applications, such numerical algorithms may not be efficient enough, since the number of arithmetic operations required is normally proportional to the cube of the Hessian matrix's dimension n, i.e., O(n³) operations [12].

To solve optimization problems more efficiently, many parallel-processing computational methods have been proposed, analyzed, and implemented based on specific architectures, e.g., the neural-dynamic and analog solvers investigated in [15], [16], [17], [18], [19], [20]. Due to the in-depth research on recurrent neural networks (RNNs), such a neural-dynamic approach is now regarded as a powerful alternative for real-time computation and optimization, owing to its parallel-processing distributed nature and its convenience of hardware implementation [21], [22], [23], [24], [25], [26], [27], [28], [29]. It is worth noting that Chua and Lin [21] developed a canonical nonlinear programming circuit (NPC) for simulating general nonlinear programs, and that Kennedy and Chua [22] explored the stability properties of such a canonical nonlinear programming circuit model with smooth objective function and constraints. Forti et al. [27] introduced a generalized circuit for nonsmooth programming problems, which is a natural extension of the NPC. Bian and Xue [28] proposed a subgradient-based neural network to solve an optimization problem whose objective function is nonsmooth and nonconvex.

A number of neural-dynamic models have been proposed which are usually based on gradient-descent methods, termed gradient neural networks (GNNs). For solving static problems, such GNNs can be proved to converge exponentially to the theoretical optimal solutions [9], [18], [24], [30]. To lay a basis for further discussion, the design of a GNN model can generally be described as follows.

  • First, to solve a given problem via GNN, a scalar-valued norm-based nonnegative energy function ε(t) ≥ 0 ∈ R is defined. Theoretically speaking, a minimum point of the energy function ε(t) is achieved with ε(t) = 0, corresponding to the situation that the neural solution [e.g., x(t)] is the theoretical solution (e.g., x*) of the given static problem.

  • Second, a computational scheme can be designed to evolve along a descent direction of the energy function ε(t), until the minimum point is reached. It is worth mentioning here that a typical descent direction is the negative gradient of ε(t), i.e., −∂ε/∂x.

  • Third, the conventional gradient neural network design formula can be chosen and implemented as follows:

        ẋ(t) ≜ dx(t)/dt = −γ ∂ε/∂x,    (1)

    where design parameter γ > 0 corresponds to the reciprocal of a capacitance parameter in the analog-circuit implementation of GNN (1), and should be set as large as the hardware permits, or selected appropriately for simulative and/or experimental purposes [31], [32]. A minimal simulation sketch of this design procedure is given right after this list.
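To make the three-step procedure above concrete, the following is a minimal simulation sketch (not the paper's code) of design formula (1) applied to a static QM problem, using the energy function ε = ‖Px + q‖²/2; the example data P and q, as well as the values of γ, the step size, and the time horizon, are illustrative assumptions.

```python
import numpy as np

# Illustrative static QM problem: minimize x'Px/2 + q'x.
# P and q are assumed example data, not taken from the paper.
P = np.array([[4.0, 1.0],
              [1.0, 3.0]])              # symmetric positive-definite Hessian
q = np.array([-1.0, 2.0])

x_star = -np.linalg.solve(P, q)         # theoretical minimizer: P x* + q = 0

# Energy function eps = ||P x + q||^2 / 2, whose gradient is P'(P x + q),
# i.e., P(P x + q) for symmetric P. Design formula (1) then gives the ODE
#   xdot = -gamma * P (P x + q).
gamma, dt, T = 10.0, 1e-4, 2.0
x = np.zeros(2)                         # arbitrary initial state
for _ in range(int(T / dt)):            # forward-Euler integration of the ODE
    x = x - dt * gamma * P @ (P @ x + q)

print("GNN solution :", x)
print("theoretical  :", x_star)         # for static P, q the error -> 0
```

For static coefficients the solution error decays exponentially to zero, which is exactly the behavior that the time-varying analysis below shows is lost.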

However, it is worth pointing out that such a GNN is designed intrinsically for solving static problems with constant coefficient matrices and vectors. When applied to time-varying matrix–vector problems, the GNN may generate a considerably large solution error, as shown in [33] by Zhang et al. Motivated by this less-favorable phenomenon, the authors have investigated the underlying problems and the performance of GNN-type solvers applied to time-varying quadratic optimization. The main contributions of this paper are as follows.

  • (1) In this paper, the less-favorable phenomenon of GNN is pointed out formally and systematically; i.e., the conventional GNN, as a system, cannot exactly solve time-varying quadratic optimization problems. In other words, there always exists a nonzero steady-state solution error between the GNN solution and the time-varying theoretical solution.

  • (2) This paper investigates and analyzes the performance of GNN applied to time-varying QM and QP problems. Rigorous theoretical analysis is provided for estimating the steady-state solution-error bound and the exponential convergence rate with which the GNN approaches such an error bound.

  • (3) The design gain γ is widely exploited in the literature. However, almost all existing works only show examples substantiating the role of this parameter for static problems solving; the gradient neural networks are then applied to time-varying problems solving without theoretical proof. In this paper, theoretical analysis and proofs are presented for the gradient neural network solving time-varying quadratic optimization problems, including the theoretically proved important role of design parameter γ in the time-varying situation.

  • (4) Illustrative examples of online time-varying QM and QP problems solving substantiate the theoretical results well, e.g., those about the steady-state solution-error bound and the important roles of design parameter γ and solution-variation rate ζ.

  • (5) The application and simulation results based on a six-link robot manipulator further substantiate the performance analysis of the GNN solving time-varying QP problems.

To the best of the authors' knowledge, few studies on such time-varying performance analysis have been published to date. Note that Myung and Kim [25] proposed a time-varying two-phase (TVTP) algorithm which can give exact feasible solutions with a finite penalty parameter when the problem is a constrained time-varying optimization. Their work is inspiring and interesting. Evidently, there are some substantial differences between Myung and Kim's pioneering work and this paper, as listed below.

  • (1) Myung and Kim's work considers the time-varying optimization problem as time t tends to infinity or becomes sufficiently large. In contrast, this paper aims to find the time-varying optimal solution at any time instant.

  • (2) The constraints in Myung and Kim's work are static. In contrast, the coefficients of the quadratic program investigated in this paper are time-varying, i.e., with a time-varying matrix/vector constraint.

  • (3) Myung and Kim's work exploits a penalty-based method for handling time-varying optimization problems. In contrast, this paper exploits time-varying linear-equation solving and the Lagrange multiplier method for handling the time-varying quadratic optimization problems at any time instant t ∈ [0, ∞); see the sketch after this list.
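As a hedged sketch of the approach named in item (3) above (Section 3 develops it rigorously), the Lagrange multiplier method converts the equality-constrained time-varying QP into a time-varying linear system:

```latex
% Lagrangian of the time-varying QP (a sketch consistent with Section 3):
%   L(x, \lambda, t) = x^T(t)P(t)x(t)/2 + q^T(t)x(t)
%                      + \lambda^T(t)\,(A(t)x(t) - b(t)).
% Zeroing the partial derivatives of L with respect to x and \lambda gives
\begin{equation*}
\underbrace{\begin{bmatrix} P(t) & A^{T}(t) \\ A(t) & 0 \end{bmatrix}}_{W(t)}
\underbrace{\begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix}}_{y(t)}
=
\underbrace{\begin{bmatrix} -q(t) \\ b(t) \end{bmatrix}}_{u(t)},
\end{equation*}
% i.e., a time-varying linear equation W(t)y(t) = u(t) to be solved at
% every time instant t.
```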

The remainder of this paper is organized as follows. Section 2 describes the use of the GNN for solving the online time-varying QM problem. Online time-varying QP problem solving is further developed and investigated in Section 3. In Section 4, the authors analyze the performance of the GNN applied to time-varying QM and QP problems solving; specifically, the solution-error bound and the exponential convergence rate towards such a bound are analyzed. Several illustrative examples are shown in Section 5. Section 6 further presents the application and simulation results based on a six-link planar robot manipulator. Finally, the paper is concluded with some remarks in Section 7.

Section snippets

Time-varying quadratic minimization

Consider the following time-varying quadratic minimization problem:

    minimize    f(x) ≜ xᵀ(t)P(t)x(t)/2 + qᵀ(t)x(t) ∈ R,    (2)

where Hessian matrix P(t) ∈ R^{n×n} is smoothly time-varying, positive-definite and symmetric at any time instant t ∈ [0, +∞) ⊂ R, and the coefficient vector q(t) ∈ R^n is assumed to be smoothly time-varying as well. In expression (2), the unknown vector x(t) ∈ R^n is to be solved for so that the value of f(x) is always smallest at any time instant t ∈ [0, +∞).
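For reference in the performance analysis below, the time-varying theoretical solution of (2) follows directly from the first-order optimality condition (a short worked step):

```latex
% First-order optimality condition of (2), holding at each time instant t:
%   \nabla_x f = P(t)x(t) + q(t) = 0.
% Since P(t) is positive-definite (hence invertible), the time-varying
% theoretical solution is
\begin{equation*}
x^{*}(t) = -P^{-1}(t)\,q(t), \qquad t \in [0, +\infty).
\end{equation*}
```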

One simple method for solving the time-varying QM problem (2) …

Time-varying quadratic programming

As we know, minimizing the quadratic function (2) alone may not be enough to describe practical problems in some fields. For instance, in the application to robot-arm motion planning [8], [10], [17], [31], the consideration of end-effector path tracking yields a time-varying QP formulation subject to a time-varying linear-equality constraint. In this section, let us consider the following time-varying convex quadratic programming problem which is subject to a time-varying linear-equality constraint.

Time-varying performance analysis

From the computer-simulation results of time-varying QM and QP problems solving, we clearly see the drawbacks of GNN models applied to the situation of time-varying coefficients. Fig. 1, Fig. 2 and Fig. 3 all indicate that, when applied to time-varying problems solving, the computational errors generated by GNN solvers cannot decrease to zero. The GNN model for online minimization of a quadratic function, constrained or unconstrained, could only approximately approach its time-varying theoretical solution.
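As a hedged one-paragraph sketch of why this error cannot vanish (the paper's theorems give the rigorous bound), write e(t) = x(t) − x*(t) for the GNN applied to (2):

```latex
% Substituting x(t) = e(t) + x^*(t) into the GNN dynamics
%   \dot{x}(t) = -\gamma P(t)\,(P(t)x(t) + q(t))
% and using P(t)x^*(t) + q(t) = 0 yields the error dynamics
\begin{equation*}
\dot{e}(t) = -\gamma P^{2}(t)\,e(t) - \dot{x}^{*}(t).
\end{equation*}
% The forcing term \dot{x}^*(t) -- whose norm is the solution-variation
% rate, bounded by \zeta -- never disappears for a genuinely time-varying
% problem, so e(t) settles into a nonzero residual of order
% \zeta / (\gamma\,\alpha^{2}), where \alpha > 0 lower-bounds the
% eigenvalues of P(t). Increasing \gamma shrinks the bound but cannot
% make it zero for any finite \gamma.
```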

Simulative verification

As Section 4 proves, the steady-state error cannot decrease to zero when the GNN model is exploited for solving time-varying QM and QP problems, although the solution exponentially converges to a loose error bound. In this section, we present some illustrative examples to show the validity of the above theoretical results, and to show how the parameters γ and ω affect the performance of the GNN exploited for solving the time-varying QM and QP problems.
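The following minimal sketch (the problem data below are illustrative assumptions, not the paper's example) shows this effect numerically: the steady-state error stays nonzero but shrinks roughly in proportion to 1/γ.

```python
import numpy as np

# Assumed illustrative time-varying QM problem (not the paper's example):
#   P(t) = diag(2 + sin t, 2 + cos t),  q(t) = [sin t, cos t]'.
def P(t):
    return np.diag([2.0 + np.sin(t), 2.0 + np.cos(t)])

def q(t):
    return np.array([np.sin(t), np.cos(t)])

def steady_state_error(gamma, dt=1e-4, T=20.0):
    """Euler-integrate the GNN  xdot = -gamma*P(t)(P(t)x + q(t))  and
    return the solution error ||x(T) - x*(T)|| after transients die out."""
    x = np.zeros(2)
    for k in range(int(T / dt)):
        t = k * dt
        x = x - dt * gamma * P(t) @ (P(t) @ x + q(t))
    x_star = -np.linalg.solve(P(T), q(T))   # x*(t) = -P^{-1}(t) q(t)
    return np.linalg.norm(x - x_star)

for gamma in (1.0, 10.0, 100.0):
    err = steady_state_error(gamma)
    print(f"gamma = {gamma:6.1f} -> steady-state error = {err:.2e}")
    # The error never reaches zero, but decreases roughly as 1/gamma.
```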

Let us consider the time-varying QM problem …

Application to robot tracking

As we know and have experienced [8], [10], [17], [31], [36], gradient-type neural networks have been exploited in the motion planning and control of redundant robots with end-effectors tracking desired trajectories. As pointed out in the authors' previous work [36], the motion planning problem of robot manipulators can be formulated as the following time-varying quadratic program subject to a time-varying linear-equality constraint:

    minimize    xᵀ(t)Px(t)/2 + qᵀ(t)x(t),
    subject to  A(t)x(t) = b(t),

with P ≜ I, q(t) …
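A hedged simulation sketch of this formulation follows; the one-row A(t) and b(t) below are hypothetical stand-ins for the manipulator's Jacobian and end-effector velocity command (the actual six-link data are not reproduced here).

```python
import numpy as np

# Time-varying equality-constrained QP:
#   minimize x'Px/2 + q'x  subject to A(t)x = b(t),  with P = I.
n, m = 3, 1                                         # 3 "joints", 1 task row
def A(t):
    return np.array([[np.cos(t), np.sin(t), 1.0]])  # hypothetical Jacobian row

def b(t):
    return np.array([np.cos(2.0 * t)])              # hypothetical task velocity

Pmat, qvec = np.eye(n), np.zeros(n)

def W(t):   # augmented coefficient matrix [[P, A'], [A, 0]] (Lagrange method)
    return np.block([[Pmat, A(t).T],
                     [A(t), np.zeros((m, m))]])

def u(t):   # augmented right-hand side [-q; b]
    return np.concatenate([-qvec, b(t)])

# GNN on eps = ||W(t)y - u(t)||^2 / 2:  ydot = -gamma * W'(t)(W(t)y - u(t)).
gamma, dt, T = 100.0, 1e-4, 10.0
y = np.zeros(n + m)                       # y = [x; lambda]
for k in range(int(T / dt)):
    t = k * dt
    Wt = W(t)
    y = y - dt * gamma * Wt.T @ (Wt @ y - u(t))

x_gnn = y[:n]
x_star = np.linalg.solve(W(T), u(T))[:n]  # instantaneous theoretical solution
print("GNN joint velocities:", x_gnn)
print("theoretical         :", x_star)    # a small lag (steady-state) error remains
```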

Conclusions

This paper has investigated the performance of the gradient neural network (GNN) exploited for online time-varying quadratic minimization (QM) and equality-constrained quadratic programming (QP) problems solving. As seen from the theoretical results presented in this paper, the conventional gradient-type neural network cannot solve the time-varying quadratic optimization problem exactly. Computer-simulation results have substantiated the performance analysis of such a GNN model exploited for online time-varying QM and QP problems solving.

Acknowledgements

The authors would like to sincerely thank the editors and anonymous reviewers for their constructive comments and suggestions, which have greatly improved the presentation and quality of this paper.


References (36)

  • B. Fares et al., Robust control via sequential semidefinite programming, SIAM J. Control Optim. (2002)

  • N. Grudinin, Reactive power optimization using successive quadratic programming method, IEEE Trans. Power Syst. (1998)

  • J. Wang et al., Recurrent neural networks for real-time computation of inverse kinematics of redundant manipulators

  • Y. Zhang et al., Bi-criteria velocity minimization of robot manipulators using LVI-based primal-dual neural network and illustrated via PUMA560 robot arm, Robotica (2010)

  • W.E. Leithead et al., O(N²)-operation approximation of covariance matrix inverse in Gaussian process regression based on quasi-Newton BFGS method, Communications in Statistics – Simulation and Computation (2007)

  • W. Li et al., A new algorithm for solving strictly convex quadratic programs, SIAM J. Optim. (1997)

  • F. Leibfritz et al., Inexact SQP interior point methods and large scale optimal control problems, SIAM J. Control Optim. (1999)

  • D.W. Tank et al., Simple "neural" optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit, IEEE Trans. Circuits Syst. (1986)

    Yunong Zhang is a professor at School of Information Science and Technology, Sun Yat-sen University (SYSU), Guangzhou, China. He received his B.S., M.S., and Ph.D. degrees, respectively from Huazhong University of Science and Technology, South China University of Technology, and Chinese University of Hong Kong, in 1996, 1999, and 2003, respectively. Before joining SYSU in 2006, Yunong Zhang had been with National University of Ireland, University of Strathclyde, National University of Singapore since 2003. His main research interests include neural networks, robotics, and scientific computing.

    Yiwen Yang was born in Guangdong, China, in 1986. He received the B.S. degree in Software Engineering, Sun Yat-sen University (SYSU), Guangzhou, China, in 2009. He is currently pursuing the M.S. degree in Communication and Information Systems at School of Information Science and Technology, Sun Yat-sen University. His research interests include neural networks, nonlinear systems, machine learning, and robotics.

    Gongqin Ruan was born in Guangdong, China, in 1985. He received the B.S. degree in Electronic Information Engineering, Guangdong University of Technology (GDUT), China, in 2008. He received the M.S. degree in Communication and Information Systems at School of Information Science and Technology, Sun Yat-sen University (SYSU), Guangzhou, China, in 2010. His current research interests include neural networks, nonlinear systems, and intelligent information.

    This work is supported by the National Natural Science Foundation of China under Grants 61075121 and 60935001, and also by the Fundamental Research Funds for the Central Universities of China.
