Neurocomputing
Volume 67, August 2005, Pages 456-463

Letter
A method to improve the transiently chaotic neural network

https://doi.org/10.1016/j.neucom.2004.12.004

Abstract

In this article, we propose a method for improving the transiently chaotic neural network (TCNN) by introducing several time-dependent parameters. This method allows the network to have rich chaotic dynamics in its initial stage and to reach a state in which all neurons are stable soon after the last bifurcation. This enables the network to have rich search ability initially and to use less CPU time to reach a stable state. The simulation results on the N-queen problem confirm that this method effectively improves both the solution quality and convergence speed of TCNN.

Introduction

Neural networks have been shown to be powerful tools for solving combinatorial optimization problems, particularly NP-hard problems. The Hopfield neural network [4], [5], one of the best-known models of this type, converges to a stable equilibrium point due to its gradient descent dynamics; however, it suffers from severe local-minimum problems when applied to optimization problems. Although many methods have been suggested that attempt to improve it [8], [13], the results have not always been satisfactory. Recently, many artificial neural networks with chaotic dynamics have been investigated [1], [10]; these exhibit much richer, far-from-equilibrium dynamics with various coexisting attractors, not only fixed and periodic points but also strange attractors, even though their governing equations are simple. However, it is usually difficult to decide how long to maintain chaotic dynamics, or how to harness chaotic behavior so that the network converges to a stable equilibrium point corresponding to an acceptably near-optimal state [2]. Seeking to reconcile the Hopfield network's convergent dynamics with chaotic dynamics, Chen and Aihara proposed the transiently chaotic neural network (TCNN) [2]. However, the TCNN has many parameters that affect its convergence speed and solution quality [2], and these therefore need to be chosen carefully. Furthermore, a network with rich dynamics usually requires more steps to stabilize. In this article, we present a method that introduces several time-dependent parameters into the original TCNN model. The modified TCNN has relatively rich dynamics initially and, after the last bifurcation, converges sooner to a state in which all its neurons are stable.

Section snippets

The transiently chaotic neural network

Chen and Aihara's TCNN model is defined as follows:

$$v_i(t) = \frac{1}{1 + e^{-u_i(t)/\varepsilon}},$$

$$u_i(t+1) = k\,u_i(t) + \alpha\left(\sum_{j=1,\, j \neq i}^{n} w_{ij} v_j(t) + I_i\right) - z_i(t)\left(v_i(t) - I_0\right),$$

$$z_i(t+1) = (1-\beta)\,z_i(t),$$

where $i$ is the index of neurons and $n$ is the number of neurons ($1 \leq i \leq n$), $v_i$ the output of neuron $i$, $u_i$ the internal state of neuron $i$, $w_{ij}$ the connection weight from neuron $j$ to neuron $i$, $I_i$ the input bias of neuron $i$, $\alpha$ the positive scaling parameter for inputs, $k$ the damping factor of the nerve membrane ($0 \leq k \leq 1$), $z_i(t)$ the self-feedback connection weight or refractory strength ($z_i(t) \geq 0$), $I_0$ a positive parameter, $\varepsilon$ the steepness parameter of the output function ($\varepsilon > 0$), and $\beta$ the damping factor of $z_i(t)$ ($0 \leq \beta \leq 1$).
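For concreteness, here is a minimal sketch of one synchronous iteration of these equations in Python. This is not the authors' code: the default parameter values are placeholders in the range typically reported for TCNNs, and the weight matrix `W` is assumed to have a zero diagonal so that the sum over $j \neq i$ reduces to a matrix-vector product.

```python
import numpy as np

def tcnn_step(u, z, W, I, k=0.9, alpha=0.015, beta=0.001, eps=0.004, I0=0.65):
    """One synchronous TCNN update for internal states u and self-feedback z.

    u : internal states u_i(t), shape (n,)
    z : self-feedback weights z_i(t), shape (n,)
    W : connection weights w_ij with zero diagonal, shape (n, n)
    I : input biases I_i, shape (n,)
    """
    v = 1.0 / (1.0 + np.exp(-u / eps))           # v_i(t) = 1 / (1 + e^{-u_i(t)/eps})
    u_next = k * u + alpha * (W @ v + I) - z * (v - I0)
    z_next = (1.0 - beta) * z                    # exponential decay of the self-feedback
    return u_next, z_next
```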

The method to improve the original model

In order to improve the convergence speed and search ability of the original TCNN, we replace $\alpha$, $\beta$, and $\varepsilon$, which are constants in the original TCNN model, with three time-dependent variables $\alpha(t)$, $\beta(t)$, and $\varepsilon(t)$. They are updated as follows:

$$\alpha(t+1) = \begin{cases} (1+\lambda)\,\alpha(t), & \text{if } \alpha(t) < 0.1,\\ 0.1, & \text{otherwise,} \end{cases}$$

$$\beta(t+1) = \begin{cases} (1+\varphi)\,\beta(t), & \text{if } \beta(t) < 0.2,\\ 0.2, & \text{otherwise,} \end{cases}$$

$$\varepsilon(t+1) = \begin{cases} (1-\eta)\,\varepsilon(t), & \text{if } \varepsilon(t) > 0.001,\\ 0.001, & \text{otherwise,} \end{cases}$$

where $\lambda$, $\varphi$, and $\eta$ are small positive constants (selected empirically).
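As a rough sketch, these schedules can be implemented as a small helper that couples with the `tcnn_step` sketch above. The caps 0.1 and 0.2 and the floor 0.001 come from the update rules; the rates `lam`, `phi`, `eta` and the initial values in the usage comment are placeholders, and for simplicity the cap is applied immediately rather than one step after the bound is crossed, as in the rules above.

```python
def update_schedules(alpha, beta, eps, lam=0.01, phi=0.01, eta=0.01):
    """Advance the time-dependent TCNN parameters by one step."""
    alpha = min((1.0 + lam) * alpha, 0.1)    # geometric growth, capped at 0.1
    beta = min((1.0 + phi) * beta, 0.2)      # geometric growth, capped at 0.2
    eps = max((1.0 - eta) * eps, 0.001)      # geometric decay, floored at 0.001
    return alpha, beta, eps

# Illustrative use inside the TCNN iteration (initial values are placeholders):
# alpha, beta, eps = 0.001, 0.001, 0.05
# for t in range(steps):
#     u, z = tcnn_step(u, z, W, I, alpha=alpha, beta=beta, eps=eps)
#     alpha, beta, eps = update_schedules(alpha, beta, eps)
```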

Initially, α(0) and β(0) are set to small values. When α(t) is small, the influence of the

Simulations on the N-queen problem

In order to confirm the effectiveness of the proposed method for TCNN, we tested it on the N-queen problem. This problem involves placing N queens on an N by N chessboard in such a way that no queen is under attack. The N-queen problem has been solved with a variety of artificial neural networks [9], [12], and, more recently, with several chaotic models [6], [7], [11]. In [11], a chaotic neural network with reinforced self-feedback was proposed.
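The paper's exact energy formulation is truncated in this snippet, but the feasibility condition itself is easy to state in code. The sketch below assumes the usual encoding with one neuron per square of the board, whose thresholded outputs form a binary $N \times N$ matrix `V`; the function is illustrative and not taken from the paper.

```python
import numpy as np

def is_valid_placement(V):
    """True if binary matrix V (V[i, j] = 1 means a queen on row i, column j)
    is a valid N-queen placement: exactly one queen per row and column, and
    no two queens on any diagonal or anti-diagonal."""
    n = V.shape[0]
    if not ((V.sum(axis=0) == 1).all() and (V.sum(axis=1) == 1).all()):
        return False
    rows, cols = np.nonzero(V)
    # With one queen per row and column, two queens conflict iff they share
    # a value of (row - col) (diagonal) or (row + col) (anti-diagonal).
    return len(set(rows - cols)) == n and len(set(rows + cols)) == n
```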

In this article, we use the formulation of the N

Conclusions

We have presented a method to improve the original TCNN for combinatorial optimization problems. This method allows a TCNN to maintain rich dynamics and, after the last bifurcation, converge in fewer steps to a state in which all its neurons are stable. Simulations on the N-queen problem show that the proposed model outperforms both the original model and another chaotic neural network in terms of the rate of convergence to optimal solutions and the average number of update steps. For other combinatorial optimization problems,

Acknowledgements

We wish to thank Prof. R. Newcomb and our anonymous reviewers for their valuable suggestions.



Cited by (16)

  • Design and analysis of a novel chaotic diagonal recurrent neural network

    2015, Communications in Nonlinear Science and Numerical Simulation
    Citation Excerpt:

    In order to make the network stable quickly, several chaos control methods have been proposed for CNNs. Xu et al. [19] adopted chaos control by introducing several time-dependent parameters into the original transiently chaotic neural network model. A chaos parameter control was designed in [20], in which the changes of the chaos parameters depend on the value of the system's partial energy and the differential value of that partial energy.

  • Multiperiodicity analysis and numerical simulation of discrete-time transiently chaotic non-autonomous neural networks with time-varying delays

    2010, Communications in Nonlinear Science and Numerical Simulation
    Citation Excerpt:

    For such an interesting phenomenon, we can refer to our computer simulations. For transiently chaotic neural networks, many previous works [10–12] mainly focus on their chaotic structure and bifurcations, which would be useful for chaos synchronization such as complete synchronization and lag synchronization [36–38]. However, multiperiodicity and convergence analysis of transiently chaotic neural networks have never been reported; these networks have some interesting attractive structures (see Figs. 1 and 3) and chaotic unstable regions (see Figs. 2 and 4), which would give insight into chaotic synchronization.

  • Minimizing interference in satellite communications using transiently chaotic neural networks

    2009, Computers and Mathematics with Applications
    Citation Excerpt:

    Numerical experiments on the traveling salesman problem (TSP) and the maintenance scheduling problem showed that the TCNN has high efficiency for converging to globally optimal solutions. The TCNN, which is also known as chaotic simulated annealing [10], is not problem-specific but a powerful general method for addressing combinatorial optimization problems (COPs) [23–25]. With autonomous decreases of the self-feedback connection, TCNNs are more effective in solving COPs compared to the HNN.

  • A TCNN filter algorithm to maximum clique problem

    2009, Neurocomputing
    Citation Excerpt:

    Hasegawa et al. have proposed an automatic parameter tuning method [8]. Xu et al. have proposed a method of introducing several time-dependent parameters into the TCNN model [9]. Although these methods give some guidance for determining parameter values, parameter tuning remains difficult, and novel methods are still required to support the determination of parameter values.


Xinshun Xu received a B.S. degree from Shandong Normal University, Shandong, China and an M.S. degree from Shandong University, Shandong, China in 1998 and 2002, respectively. From 1998 to 2000, he was an Engineer in Shandong Provincial Education Department, Shandong, China. Now he is working toward the Ph.D. degree at Toyama University, Toyama, Japan. His main research interests are neural networks, machine learning, pattern recognition, image processing, and optimization problems.

Zheng Tang received a B.S. degree from Zhejiang University, Zhejiang, China in 1982 and an M.S. degree and a D.E. degree from Tsinghua University, Beijing, China in 1984 and 1988, respectively. From 1988 to 1989 he was an Instructor in the Institute of Microelectronics at Tsinghua University. From 1990 to 1999, he was an Associate Professor in the Department of Electrical and Electronic Engineering, Miyazaki University, Miyazaki, Japan. In 2000, he joined Toyama University, Toyama, Japan, where he is currently a Professor in the Department of Intellectual Information Systems. His current research interests include intellectual information technology, neural networks, and optimizations.

Jiahai Wang received a B.S. degree from Gannan Teachers College, Jiangxi, China and an M.S. degree from Shandong University, Shandong, China in 1999 and 2001, respectively. Now he is working toward the Ph.D. degree at Toyama University, Toyama, Japan. His main research interests include neural networks, meta-heuristic algorithms, evolutionary computation, hybrid soft computing algorithms and their applications to various real-world combinatorial optimization problems.
