
Physics Letters A

Volume 373, Issues 18–19, 20 April 2009, Pages 1639-1643

Zhang neural network for online solution of time-varying convex quadratic program subject to time-varying linear-equality constraints

https://doi.org/10.1016/j.physleta.2009.03.011

Abstract

In this Letter, by following Zhang et al.'s method, a recurrent neural network (termed Zhang neural network, ZNN) is developed and analyzed for solving online the time-varying convex quadratic-programming problem subject to time-varying linear-equality constraints. Different from conventional gradient-based neural networks (GNN), such a ZNN model makes full use of the time-derivative information of the time-varying coefficients. The resultant ZNN model is theoretically proved to converge globally and exponentially to the time-varying theoretical optimal solution of the investigated time-varying convex quadratic program. Computer-simulation results further substantiate the effectiveness, efficiency and novelty of such a ZNN model and method.

Introduction

The problems of quadratic optimization constrained with linear equalities are widely encountered in various applications; e.g., optimal controller design [1], power scheduling [2], robot-arm motion planning [3], and digital signal processing [4]. In addition, nonlinear optimization problems can often be approximated by second-order quadratic subproblems and then solved sequentially by a numerical quadratic-programming technique [5], [6]. It is worth noting that the minimal number of arithmetic operations required by a quadratic-programming (QP) algorithm is normally proportional to the cube of the Hessian matrix's dimension, and consequently such numerical algorithms may not be efficient enough for large-scale online applications [4].

As another important class of solvers for optimization problems, many parallel-processing computational methods have been proposed, developed, analyzed, and implemented on specific architectures, e.g., the neural-dynamic and analog solvers in [7], [8], [9], [10]. Such a neural-dynamic approach is now regarded as a powerful alternative for real-time computation and optimization, owing to its parallel-processing distributed nature and its convenience of hardware implementation [11], [12], [13], [14], [15], [16], [17], [18].

However, many numerical methods and neural-dynamic approaches have been designed intrinsically for solving quadratic programs subject to constant coefficients and/or constraints only. Such computational algorithms may be efficient for some applications; for time-varying applications requiring fast convergence to exact solutions, however, superior approaches might be needed or desired. Different from the conventional gradient-based methods and networks (GNN) [9], [19], [20], in this Letter a new recurrent neural network model is proposed, analyzed and simulated for the online solution of the time-varying quadratic program subject to time-varying linear-equality constraints, by following Zhang et al.'s design method [15], [16], [17], [18]. For presentation convenience, the name 'Zhang neural network (ZNN)' is used in this Letter, which may help readers better understand its difference from the well-known gradient-based approach and the related neural-network models.

To the best of our knowledge, there is at the present stage almost no other literature dealing with such time-varying quadratic programs subject to time-varying linear constraints, and the main contributions of this Letter lie in the following facts.

  • (1)

    The online solution of time-varying quadratic programs subject to time-varying constraints is investigated in this Letter, rather than conventionally-investigated static quadratic programs subject to static constraints [19], [21], [22].

  • (2)

    In this Letter, to solve the time-varying quadratic program, an implicit-dynamic ZNN model is designed based on a vector-valued indefinite error function, rather than the scalar-valued nonnegative energy functions usually associated with GNN (or Hopfield-type networks).

  • (3)

    A general framework of the ZNN model is proposed for solving the quadratic program with time-varying coefficients, and it is guaranteed to converge globally to the theoretical time-varying optimal solution. By using linear and power-sigmoid activation functions, the ZNN model achieves, respectively, global exponential convergence and superior convergence to the theoretical time-varying optimal solution.

  • (4)

    An illustrative verification-and-comparison example is presented, in which the ZNN and GNN models are both used to solve the time-varying quadratic program. The novelty, difference and efficacy of the proposed ZNN model are thus clearly demonstrated.

Section snippets

Problem formulation and preliminaries

Consider the following time-varying convex quadratic-programming problem subject to the time-varying linear-equality constraints $A(t)x(t)=b(t)$:
$$\text{minimize}\quad x^{T}(t)P(t)x(t)/2 + q^{T}(t)x(t),$$
$$\text{subject to}\quad A(t)x(t)=b(t),$$
where the vector $x(t)\in\mathbb{R}^{n}$ at time instant $t\in[0,+\infty)$ is unknown and to be solved for in real time. In time-varying quadratic program (1), the Hessian matrix $P(t)\in\mathbb{R}^{n\times n}$, the coefficient vector $q(t)\in\mathbb{R}^{n}$, the coefficient matrix $A(t)\in\mathbb{R}^{m\times n}$ (of full row rank), and the coefficient vector $b(t)\in\mathbb{R}^{m}$ are assumed to be smoothly time-varying.
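Although the snippet breaks off here, the standard Lagrange-multiplier treatment of such an equality-constrained convex QP (the route followed in Zhang et al.'s related ZNN work) converts problem (1) into a time-varying linear system. The reformulation below is our sketch of that step; the multiplier vector $\lambda(t)\in\mathbb{R}^{m}$ and the symbols $W(t)$, $y(t)$, $u(t)$ are introduced here for illustration only.

$$L\bigl(x(t),\lambda(t),t\bigr) = x^{T}(t)P(t)x(t)/2 + q^{T}(t)x(t) + \lambda^{T}(t)\bigl(A(t)x(t)-b(t)\bigr),$$

and zeroing its partial derivatives with respect to $x(t)$ and $\lambda(t)$ yields

$$W(t)\,y(t) = u(t),\qquad
W(t) = \begin{bmatrix} P(t) & A^{T}(t)\\ A(t) & 0 \end{bmatrix},\quad
y(t) = \begin{bmatrix} x(t)\\ \lambda(t) \end{bmatrix},\quad
u(t) = \begin{bmatrix} -q(t)\\ b(t) \end{bmatrix}.$$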

Neural network solvers and comparison

In the literature, conventional gradient-based neural networks and computational algorithms have been developed to solve algebraic and optimization problems with constant coefficients [9], [19], [21], [22]. However, when applied to the time-varying situation, such gradient-based approaches require a convergence rate much faster than the rate of variation of the coefficient matrices and vectors. This may thus impose very stringent restrictions on hardware realization and/or
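To make the comparison concrete, the sketch below contrasts the two design recipes as they are commonly stated in the GNN/ZNN literature, using the notation $W(t)y(t)=u(t)$ from the reformulation above: the GNN descends the gradient of the scalar energy $\|W(t)y-u(t)\|^{2}/2$ with the coefficients treated as frozen, whereas the ZNN zeroes the vector-valued error $e(t)=W(t)y(t)-u(t)$ by imposing $\dot e(t)=-\gamma\Phi(e(t))$, which produces an implicit dynamic that also uses $\dot W(t)$ and $\dot u(t)$. The Python code is a minimal illustration under these assumptions; the function names and the explicit linear solve for $\dot y$ are our own conveniences, not the Letter's exact models (6), (7).

import numpy as np

def gnn_rhs(t, y, W, u, gamma, phi):
    # Gradient-based (GNN-type) dynamics: y' = -gamma * W(t)^T * phi(W(t) y - u(t)).
    # Note that no coefficient time-derivatives are used.
    Wt, ut = W(t), u(t)
    return -gamma * Wt.T @ phi(Wt @ y - ut)

def znn_rhs(t, y, W, u, dW, du, gamma, phi):
    # Zhang-type (ZNN) dynamics obtained from e(t) = W(t) y - u(t) and
    # e'(t) = -gamma * phi(e(t)):
    #     W(t) y' = -W'(t) y - gamma * phi(W(t) y - u(t)) + u'(t).
    # Solved here explicitly for y' (the augmented matrix W(t) is nonsingular
    # when P(t) is positive definite and A(t) has full row rank).
    Wt, ut = W(t), u(t)
    rhs = -dW(t) @ y - gamma * phi(Wt @ y - ut) + du(t)
    return np.linalg.solve(Wt, rhs)

In hardware or circuit realizations the implicit form $W(t)\dot y(t)=\cdots$ would be kept as is; the explicit solve above is only a convenience for numerical simulation.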

Convergence analysis and results

In this section, three theorems about the convergence properties of ZNN model (6) are established for the online solution of time-varying quadratic program (1). The analysis covers the cases of linear and power-sigmoid activation functions. Due to space limitations, the proofs of the following theorems are omitted; they can be generalized from [16], [18] by taking into account the Lyapunov function candidate $v = e^{T}(t)e(t)/2$ and using Lyapunov theory [27], [28].
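As a one-line illustration of how such proofs proceed (our paraphrase of the argument generalized from [16], [18], not the Letter's omitted proof), differentiating the candidate $v = e^{T}(t)e(t)/2$ along the ZNN error dynamics $\dot e(t) = -\gamma\Phi(e(t))$ gives

$$\dot v = e^{T}(t)\,\dot e(t) = -\gamma\, e^{T}(t)\,\Phi\bigl(e(t)\bigr) \le 0,$$

since each entry of the activation-function array $\Phi(\cdot)$ is odd and monotonically increasing; with the linear activation $\Phi(e)=e$ this becomes $\dot v = -2\gamma v$, so $\|e(t)\|$ decays exponentially with rate $\gamma$, which is the mechanism behind the global exponential convergence stated below.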

Theorem 1

Consider smoothly time-varying

Illustrative example

In this section we present an illustrative computer-simulation example to demonstrate the convergence characteristics of the neural networks. For illustration and comparison purposes, both ZNN model (6) and GNN model (7) are employed to solve online such a time-varying quadratic program (3); both exploit power-sigmoid activation functions with design parameters r = 2 and ζ = 4.
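Since the coefficients of example (3) and the exact power-sigmoid definition behind r = 2 and ζ = 4 are not visible in this snippet, the driver below is purely a hypothetical usage sketch: it reuses znn_rhs from the earlier sketch, substitutes simple sinusoidal placeholder coefficients, and takes the linear activation for brevity.

import numpy as np
from scipy.integrate import solve_ivp

# Placeholder time-varying coefficients (NOT the Letter's example (3)); n = 2, m = 1.
P = lambda t: np.array([[2.0 + np.sin(t), 0.0], [0.0, 2.0 + np.cos(t)]])
q = lambda t: np.array([np.sin(t), np.cos(t)])
A = lambda t: np.array([[np.cos(t), np.sin(t)]])
b = lambda t: np.array([1.0])

def W(t):  # augmented matrix [[P, A^T], [A, 0]]
    At = A(t)
    return np.block([[P(t), At.T], [At, np.zeros((1, 1))]])

def u(t):  # augmented right-hand side [-q; b]
    return np.concatenate([-q(t), b(t)])

eps = 1e-6  # central finite differences stand in for the analytic time derivatives
dW = lambda t: (W(t + eps) - W(t - eps)) / (2.0 * eps)
du = lambda t: (u(t + eps) - u(t - eps)) / (2.0 * eps)

gamma = 10.0
phi = lambda e: e  # linear activation; a power-sigmoid array would replace this

y0 = np.random.uniform(-1.0, 1.0, size=3)  # random [x1(0), x2(0), lambda1(0)] in [-1, 1]^3
sol = solve_ivp(lambda t, y: znn_rhs(t, y, W, u, dW, du, gamma, phi),
                (0.0, 10.0), y0, max_step=1e-2)
x_traj, lam_traj = sol.y[:2], sol.y[2]  # neural state x(t) and multiplier lambda(t)

With γ chosen large relative to the variation rate of the coefficients, the residual ‖W(t)y(t) − u(t)‖ along the integrated trajectory is expected to vanish rapidly, mirroring the convergence behaviour reported for Fig. 2.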

As seen from Fig. 2, starting from the randomly-generated initial state $y(0)=[x_1(0),x_2(0),\lambda_1(0)]^{T}\in[-1,1]^{3}$, the state vector $y(t)$

Concluding remarks

In this Letter, a Zhang neural network model is proposed, investigated and applied to time-varying quadratic-program solving. Different from conventional gradient-based neural-network methods and models, the Zhang neural network methodologically utilizes the time-derivative information of the time-varying problem and thus achieves global exponential convergence to the time-varying optimal solution. Computer-simulation results via power-sigmoid activation functions have further substantiated the effectiveness and efficiency of the proposed ZNN model and method.

References (28)

  • Y. Zhang et al., Appl. Math. Comput. (2005)
  • Y. Zhang et al., J. Comput. Appl. Math. (2008)
  • N. Yildiz, Phys. Lett. A (2005)
  • Y. Zhang et al., Phys. Lett. A (2002)
  • W. Allegretto et al., Phys. Lett. A (2007)
  • Y. Suemitsu et al., Phys. Lett. A (2005)
  • Y. Zhang et al., The link between Newton iteration for matrix inversion and Zhang neural network (ZNN)
  • Y. Zhang et al., IEEE Trans. Neural Networks (2005)
  • Y. Zhang et al., Electron. Lett. (2008)
  • T.A. Johansen et al., IEEE Trans. Control Syst. Tech. (2004)
  • N. Grudinin, IEEE Trans. Power Syst. (1998)
  • Y. Zhang et al., IEEE Trans. Robot. Automat. (2002)
  • W.E. Leithead et al., Commun. Stat. Simul. Comput. (2007)
  • D.W. Tank et al., IEEE Trans. Circuits Syst. (1986)