Generalized Hopfield network based structural optimization using sequential unconstrained minimization technique with additional penalty strategy

https://doi.org/10.1016/S0965-9978(02)00060-1

Abstract

This paper presents and examines a neuron-like framework, the generalized Hopfield network (GHN), that is capable of solving nonlinear engineering optimization problems with mixed discrete, integer and real continuous variables. The sequential unconstrained minimization technique (SUMT) is applied to construct the GHN so that the design constraints can be handled. An additional penalty function for handling the discrete and integer variables is then imposed on the SUMT formulation to construct an energy function of the GHN, from which the neuron-like dynamical system is formulated. The numerical solution process for such a dynamical system simply amounts to solving a set of simultaneous first-order ordinary differential equations (ODEs), which is the main feature of this optimization method. The numerical examples show that the presented strategy is reliable. Suitable values, or adaptation techniques, for the parameters involved in the computation are discussed in the paper. The presented strategy provides an alternative way of handling engineering optimization dynamically and expands the usage of ODEs. An asymmetrical three-bar truss design, a reinforced concrete beam design and a 10-bar structural design are given to illustrate the presented neuron-like network method.

Introduction

The solution of nonlinear constrained problems with mixed discrete, integer and continuous variables generally constitutes a more complex, more difficult and often frustrating task than that of problems with purely real continuous variables. The search for new insights and effective solutions for this type of problem remains an active research endeavor. The earliest and most conventional optimization methods belong to the category of iterative line search or gradient-based approaches [1]. Engineers and designers have to learn these optimization algorithms before they can solve such problems successfully. Although several line search methods are reliable, engineers still have to learn the specific computational techniques. This paper is motivated by the search for an alternative optimization method that can solve general nonlinear optimal design problems with a well-developed and popular numerical method, without learning the varying details of different optimization algorithms. The iterative line search schemes mentioned above can be considered as discrete-time realizations of continuous-time dynamical systems. A continuous-time dynamical system can be represented by an analog neuron-like network that processes a large number of variables simultaneously. To formulate an optimization problem in terms of an artificial neural network (ANN), the key step is to derive a computational energy function (Lyapunov function) so that the lowest energy state corresponds to the desired final design.

Two important ANN models have been proposed for solving nonlinear programming problems. Tank and Hopfield [2] introduced the first ANN model for linear programming problems. They showed that the energy function of the network is monotonically nonincreasing with time. Kennedy and Chua [3] developed the second model based on the earlier work of Chua and Lin [4]. They showed that the linear programming network of Tank and Hopfield is a special case of the canonical nonlinear programming circuit of Chua and Lin with an added capacitor describing the dynamical behavior of the circuit. The present paper basically adopts the integrator used in Kennedy and Chua's model to study the optimization problem as a continuous-time (analog) dynamical system. First, the original nonlinear optimization problem is transformed into an energy function. A dynamic model, which contains a set of nonlinear ordinary differential equations (ODEs) [5], is then derived by using the sequential unconstrained minimization technique (SUMT) [1] for the continuous design variables. The additional penalty function strategy presented in this paper can be imposed on the energy function, resulting in a pseudo-energy function that handles the discrete and/or integer variables. This pseudo-energy function thus yields a system of dynamical ODEs that can be solved accordingly. In the following sections, the practical algorithm for dealing with mixed variable problems is presented step by step. Suitable values, or adaptation techniques, for the necessary parameters in the computational process, as well as the solution algorithm, are discussed and given in the paper.
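To make the basic idea concrete, the following minimal sketch (not the authors' code; the toy energy function, initial point and solver settings are illustrative assumptions) shows how minimizing an energy function E(x) can be recast as integrating the gradient-flow ODE dx/dt = −∇E(x) with a standard ODE solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

def energy(x):
    # Toy quadratic energy with its minimum at (1, 2); it stands in for the
    # pseudo-energy function built from SUMT plus the additional penalty.
    return (x[0] - 1.0)**2 + 10.0 * (x[1] - 2.0)**2

def grad_energy(x):
    return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)])

def flow(t, x):
    # Neuron-like dynamics: the state moves downhill on the energy surface.
    return -grad_energy(x)

sol = solve_ivp(flow, (0.0, 10.0), np.array([5.0, -3.0]), rtol=1e-8)
print("steady state:", sol.y[:, -1])        # approaches the minimizer (1, 2)
print("final energy:", energy(sol.y[:, -1]))
```

Integrating the flow to a steady state plays the same role as running a line search to convergence, which is exactly the reformulation exploited by the GHN approach.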

Section snippets

Generalized Hopfield networks of analog processors

Hopfield [6] introduced neural network computation for optimization in 1984. The linear Hopfield network was applied to the solution of combinatorial optimization problems. The constitutive dynamics move the network to a steady state that corresponds to a local extremum of the system's Lyapunov function. Tsirukis and Reklaitis [5] presented the generalized Hopfield network (GHN), which is capable of dealing with general nonlinear optimization problems by adopting suitable optimization
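For reference, the standard argument for why such dynamics settle at a local extremum (a generic gradient-flow statement, not quoted from [5]) can be written as follows:

```latex
% Along trajectories of the gradient-flow dynamics \dot{x} = -\nabla E(x),
% the energy is nonincreasing:
\frac{\mathrm{d}E}{\mathrm{d}t}
  = \nabla E(x)^{T}\,\dot{x}
  = -\,\nabla E(x)^{T}\nabla E(x)
  = -\,\lVert \nabla E(x) \rVert^{2} \le 0
```

with equality only at stationary points, so E serves as a Lyapunov function and the steady state of the network is a local extremum of E.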

Constitutive equations of SUMT network

A standard mathematical formulation of an optimization problem containing equality and inequality constraints for minimizing a cost function f(x) is stated as

\[
\begin{aligned}
&\text{Find}\quad \mathbf{x}=[x_1,x_2,\ldots,x_n]^{T}\\
&\text{Minimize}\quad f(\mathbf{x})\\
&\text{Subject to}\quad g_i(\mathbf{x})\le 0,\quad i=1,2,\ldots,m\\
&\phantom{\text{Subject to}\quad} h_j(\mathbf{x})=0,\quad j=1,2,\ldots,l\\
&\phantom{\text{Subject to}\quad} \mathbf{x}^{L}\le \mathbf{x}\le \mathbf{x}^{U}
\end{aligned}
\]

where gi(x) indicates the ith nonlinear inequality design constraint and hj(x) indicates the jth equality constraint; xL and xU represent the lower and upper bounds of the design variables, respectively. Tsirukis and Reklaitis [5] and Cichocki and
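As a concrete (if simplified) illustration of turning this constrained form into an unconstrained energy, the sketch below uses a plain exterior quadratic penalty rather than the ALM or extended interior penalty functions actually used in the paper; the toy objective, constraint, rk value and solver settings are assumptions for illustration only:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sumt_energy(x, f, gs, hs, rk):
    # E(x) = f(x) + rk * [ sum_i max(0, g_i(x))^2 + sum_j h_j(x)^2 ]
    pen = sum(max(0.0, g(x))**2 for g in gs) + sum(h(x)**2 for h in hs)
    return f(x) + rk * pen

def num_grad(E, x, eps=1e-6):
    # Central-difference gradient of the (pseudo-)energy function.
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (E(x + d) - E(x - d)) / (2.0 * eps)
    return g

# Toy problem: minimize f(x) = x1^2 + x2^2 subject to g(x) = 1 - x1 - x2 <= 0.
f  = lambda x: x[0]**2 + x[1]**2
gs = [lambda x: 1.0 - x[0] - x[1]]
hs = []
rk = 1.0
E  = lambda x: sumt_energy(x, f, gs, hs, rk)

sol = solve_ivp(lambda t, x: -num_grad(E, x), (0.0, 20.0), np.array([2.0, -1.0]))
print(sol.y[:, -1])  # about (1/3, 1/3) for rk = 1; tends to (0.5, 0.5) as rk grows
```

The same pattern applies to the ALM or extended interior penalty energies: only the expression inside sumt_energy changes, while the gradient-flow integration stays the same.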

Constitutive equations of SUMT network with additional penalty

A mixed variable problem contains a vector of design variables x=[x1,x2,…,xL,…,xM,…,xN]T consisting of L nonnegative discrete variables, (M−L) nonnegative integer variables, and (N−M) positive real continuous variables. To deal with this problem, a second penalty function is imposed on the energy function of , . The detailed description of this penalty function strategy can be found in Fu et al. [8] and the author's paper [9], covering the selection of the penalty function, the associated
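The exact form of the second penalty follows Fu et al. [8] and [9] and is not reproduced in this preview; the sketch below uses one common alternative, a quadratic distance to the nearest permissible value (integer variables being the special case whose permissible values are the integers), purely to illustrate how such a term is attached to the energy function. The weight s_k is an assumed name, not the paper's notation:

```python
import numpy as np

def discrete_penalty(x, allowed_sets, weight):
    # allowed_sets maps a variable index to the array of its permissible values;
    # each listed variable is penalized by its squared distance to the nearest one.
    p = 0.0
    for i, values in allowed_sets.items():
        nearest = values[np.argmin(np.abs(values - x[i]))]
        p += (x[i] - nearest)**2
    return weight * p

# Pseudo-energy = SUMT energy + additional penalty, e.g. (reusing sumt_energy
# from the previous sketch, with an assumed, gradually increased weight s_k):
# phi = lambda x: sumt_energy(x, f, gs, hs, rk) + discrete_penalty(x, allowed_sets, s_k)
```

As the weight grows across outer iterations, the flow is pulled toward designs whose discrete components sit exactly on permissible values.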

Algorithm of GHN based SUMT with additional penalty strategy

Using the previous descriptions, an algorithm of GHN-based SUMT with the additional penalty approach for mixed design variable problems is presented in the following (a code sketch of the overall loop is given after the step list):

  • Step 1

    Formulate the optimization problem as in , , in which x is composed of discrete variables xd, integer variables xI, and real continuous variables xc.

  • Step 2

    Formulate the energy function E(x) corresponding to A(x,λ) of the ALM strategy or Φ(x) of the extended penalty strategy indicated in , , respectively.

  • Step 3

    Construct pseudo-energy functions φA(x,λ)
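The loop below sketches how these steps could be chained in practice, reusing sumt_energy, num_grad, and discrete_penalty from the earlier sketches; the outer update rules for rk and for the discrete-penalty weight s_k (geometric growth), the integration horizon, and the final snap to permissible values are assumptions, not the paper's exact schedule:

```python
import numpy as np
from scipy.integrate import solve_ivp

def ghn_sumt(f, gs, hs, allowed_sets, x0, rk=1.0, s_k=1.0, outer_iters=8):
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        # Steps 2-3: pseudo-energy = SUMT penalty energy + additional penalty.
        phi = lambda z: (sumt_energy(z, f, gs, hs, rk)
                         + discrete_penalty(z, allowed_sets, s_k))
        # Integrate the neuron-like dynamics to a (near) steady state.
        sol = solve_ivp(lambda t, z: -num_grad(phi, z), (0.0, 20.0), x)
        x = sol.y[:, -1]
        # Outer update: tighten both penalties (assumed geometric growth).
        rk, s_k = 2.0 * rk, 2.0 * s_k
    # Finally snap the discrete/integer variables onto permissible values.
    for i, values in allowed_sets.items():
        x[i] = values[np.argmin(np.abs(values - x[i]))]
    return x
```

For purely continuous problems, allowed_sets is empty and the loop reduces to a plain GHN-based SUMT.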

Example 1. An asymmetrical three-bar truss design with discrete variables

An asymmetrical three-bar truss, shown in Fig. 2 and borrowed from Rao's book [1], is considered; the problem is to find the cross-sectional area Ai (i=1,2,3) of each member as a discrete variable with permissible values of the parameter Aiσmax/P given by 0.1, 0.2, 0.3, 0.5, 0.8, 1.0, and 1.2, while the structural weight is minimized. The constraint functions can be derived from the stresses induced in the members. The nondimensional quantities f and xi are defined as f = Wσmax/(ρℓP) and xi = Aiσmax/P (i=1,2,3), where W is
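The permissible values listed above translate directly into the allowed set used by the additional penalty; the short check below (the trial values are arbitrary, and the truss objective and stress constraints follow Rao [1] and are omitted) shows which permissible value each trial xi would be pulled toward:

```python
import numpy as np

# Permissible values of x_i = A_i * sigma_max / P for Example 1.
allowed = np.array([0.1, 0.2, 0.3, 0.5, 0.8, 1.0, 1.2])
allowed_sets = {0: allowed, 1: allowed, 2: allowed}   # x1, x2, x3 are all discrete

x_trial = np.array([0.27, 0.96, 0.44])                # arbitrary trial point
print([float(allowed[np.argmin(np.abs(allowed - v))]) for v in x_trial])
# -> [0.3, 1.0, 0.5], the values the additional penalty pulls each x_i toward
```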

Computational remarks and discussions

In addition to the three examples presented, several other problems have been solved by the proposed algorithm. The penalty parameter rk, the learning parameter ε, and the initial values of the design variables are found to be the most critical parameters influencing the final result. The value of rk usually needs to be adjusted between 1 and 10 for different problems; however, the final result is not very sensitive to it. The presented examples used rk=1 for the GHN-based ALM approach. The final rk is
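Since rk reportedly needs adjustment between 1 and 10 from problem to problem, a small sweep is one simple way to check how sensitive a given design is to this parameter. The snippet below reuses f, gs, hs and ghn_sumt from the earlier sketches; the particular rk values tried and the allowed set are arbitrary:

```python
# Sensitivity check on the penalty parameter rk (illustrative only).
toy_sets = {0: np.array([0.0, 0.5, 1.0]), 1: np.array([0.0, 0.5, 1.0])}
for rk0 in (1.0, 2.0, 5.0, 10.0):
    print(rk0, ghn_sumt(f, gs, hs, toy_sets, x0=[2.0, -1.0], rk=rk0))
```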

Conclusions

This paper successfully presents a GHN-based SUMT with an additional penalty strategy that can solve nonlinear constrained optimization problems with mixed discrete, integer and real continuous variables. An additional penalty function is imposed on the ALM function or the extended interior penalty function to construct a pseudo-energy function for formulating the neuron-like dynamical system. The numerical solution process for such a dynamical system consists of simultaneously solving first-order ODEs without

Acknowledgements

The authors gratefully acknowledge the partial financial support of this research by the National Science Council, Taiwan, ROC, under Grant NSC 88-TPC-E-032-001.

References (14)

  • C.J. Shih, Fuzzy and improved penalty approaches for multiobjective mixed-discrete optimization in structural system, Comput Struct (1997)

  • S.S. Rao, Engineering optimization: theory and practice (1996)

  • D.W. Tank et al., Simple neural optimization networks: an A/D converter, signal decision network, and a linear programming circuit, IEEE Trans Circuits Syst (1986)

  • M.P. Kennedy et al., Neural networks for nonlinear programming, IEEE Trans Circuits Syst (1986)

  • L.O. Chua et al., Nonlinear programming without computation, IEEE Trans Circuits Syst (1984)

  • A.G. Tsirukis et al., Nonlinear optimization using generalized Hopfield networks, Neural Comput (1989)

  • J.J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc Natl Acad Sci USA (1984)
