Neurocomputing

Volume 86, 1 June 2012, Pages 116-123

Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction

https://doi.org/10.1016/j.neucom.2012.01.014

Abstract

Cooperative coevolution decomposes a problem into subcomponents and employs evolutionary algorithms to solve them. Cooperative coevolution has been effective for evolving neural networks. Different problem decomposition methods in cooperative coevolution determine how a neural network is decomposed and encoded, which affects its performance. A good problem decomposition method should provide enough diversity and also group interacting variables, which in a neural network are the synapses. Neural networks have shown promising results in chaotic time series prediction. This work employs two problem decomposition methods for training Elman recurrent neural networks on chaotic time series problems. The Mackey-Glass, Lorenz and Sunspot time series are used to demonstrate the performance of the cooperative neuro-evolutionary methods. The results show an improvement in prediction accuracy when compared to some of the methods from the literature.

Introduction

Time series prediction involves studying the past and present behaviour of a system in order to predict its future. Chaos theory studies the behaviour of dynamical systems that are highly sensitive to initial conditions, where small perturbations such as noise and error grow rapidly [1], [2]. This sensitivity to initial conditions is known as the butterfly effect and makes long-term prediction difficult. The prediction of chaotic time series has a wide range of applications, such as finance [3], signal processing [4], power load forecasting [5], weather forecasting [6], hydrological prediction [7] and Sunspot prediction [8], [9], [10].

Cooperative coevolution (CC) divides a problem into subcomponents that are represented using sub-populations [11]. An important feature of cooperative coevolution is that its several subcomponents provide greater diversity than conventional evolutionary algorithms [12]. Problem decomposition in cooperative coevolution determines how a neural network is broken down and encoded as subcomponents. Cooperative coevolution has shown promising results in training neural networks [12], [13], [14], [15].
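
For concreteness, the following is a minimal sketch of such a cooperative coevolution loop (Python/NumPy). The sub-population size, truncation selection and Gaussian mutation are illustrative choices rather than the operators used in the paper, and `fitness` stands in for any error measure to be minimised:

```python
import numpy as np

def cooperative_coevolution(fitness, subcomp_sizes, pop_size=20, generations=100):
    """One sub-population per subcomponent, evolved round-robin; an individual
    is evaluated by joining it with the best individuals (collaborators) of
    the other sub-populations."""
    rng = np.random.default_rng(0)
    subpops = [rng.standard_normal((pop_size, s)) for s in subcomp_sizes]
    best = [sp[0].copy() for sp in subpops]  # current best of each sub-population

    def evaluate(i, candidate):
        parts = [candidate if j == i else best[j] for j in range(len(subpops))]
        return fitness(np.concatenate(parts))

    for _ in range(generations):
        for i, sp in enumerate(subpops):  # round-robin over subcomponents
            scores = np.array([evaluate(i, ind) for ind in sp])
            parents = sp[np.argsort(scores)[: pop_size // 2]]  # truncation selection
            children = parents + 0.1 * rng.standard_normal(parents.shape)
            subpops[i] = np.vstack([parents, children])
            scores = [evaluate(i, ind) for ind in subpops[i]]
            best[i] = subpops[i][int(np.argmin(scores))].copy()
    return np.concatenate(best)

# Usage: minimise a sphere function decomposed into two 3-dimensional subcomponents.
solution = cooperative_coevolution(lambda x: float(np.sum(x ** 2)), [3, 3])
```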

Problem decomposition is a major issue in cooperative coevolution of neural networks. The problem decomposition method should provide enough diversity and also group interacting variables, which in a neural network are the synapses. There are two major approaches to neuro-evolution using cooperative coevolution, which decompose the network at the synapse level and at the neuron level. In synapse level problem decomposition, the neural network is decomposed to its lowest level, where each weight connection (synapse) forms a subcomponent. Examples include cooperatively coevolved synapse neuro-evolution [16] and neural fuzzy networks with cultural cooperative particle swarm optimisation [17]. In neuron level problem decomposition, the neurons in the network act as the reference points for the decomposition. Examples include enforced sub-populations [18], [19] and neuron-based sub-populations [15].
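
As a toy illustration of the difference between the two schemes, the snippet below partitions the weights of a small 2-3-1 feedforward slice of a network (Python/NumPy); context-layer weights and biases are omitted for brevity, and the layer sizes are arbitrary:

```python
import numpy as np

n_in, n_hid, n_out = 2, 3, 1
w_ih = np.arange(n_in * n_hid, dtype=float).reshape(n_in, n_hid)         # input -> hidden
w_ho = np.arange(n_hid * n_out, dtype=float).reshape(n_hid, n_out) + 10  # hidden -> output

# Synapse level: every weight connection is its own subcomponent.
synapse_level = [np.array([w]) for w in np.concatenate([w_ih.ravel(), w_ho.ravel()])]

# Neuron level: the incoming weights of each neuron form one subcomponent.
neuron_level = [w_ih[:, j] for j in range(n_hid)] + [w_ho[:, k] for k in range(n_out)]

print(len(synapse_level), "synapse-level subcomponents")  # 9
print(len(neuron_level), "neuron-level subcomponents")    # 4
```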

Neuron-based sub-population [15] further breaks down the encoding scheme of enforced sub-populations [18], [19]. Neuron-based sub-population performed better than enforced sub-populations and synapse level encoding for pattern recognition problems using feedforward networks. In evolving recurrent neural networks, neuron-based sub-population showed better performance than synapse level problem decomposition for grammatical inference problems [15].

This work employs synapse and neuron level problem decomposition for training recurrent neural networks for chaotic time series prediction. The synapse level employs the problem decomposition used in cooperatively coevolved synapse neuro-evolution [16], while the neuron level employs neuron-based sub-populations [15]. The results are further compared with a standard evolutionary algorithm (EA) in which a single population is used. These methods are used to train the Elman recurrent neural network [20] on three different problems, which consist of two simulated and one real-world chaotic time series. The Lorenz and Mackey-Glass are the simulated time series, while the Sunspot is a real-world time series. The performance, in terms of accuracy, of the respective methods is evaluated on different neural network topologies given by different numbers of hidden neurons. The results are further compared with a number of computational intelligence methods from the literature.
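
For reference, the Lorenz series is obtained by integrating the Lorenz equations dx/dt = sigma*(y - x), dy/dt = x*(rho - z) - y, dz/dt = x*y - beta*z. The sketch below uses forward Euler with the standard parameter values; the step size, initial state and sampling are illustrative and may differ from the paper's setup:

```python
import numpy as np

def lorenz_series(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Forward-Euler integration of the Lorenz system; returns the x
    component as a scalar time series."""
    state = np.array([1.0, 1.0, 1.0])
    out = np.empty(n)
    for t in range(n):
        x, y, z = state
        state = state + dt * np.array([sigma * (y - x),
                                       x * (rho - z) - y,
                                       x * y - beta * z])
        out[t] = state[0]
    return out

series = lorenz_series(2000)
```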

The contribution of the paper is in the application of an existing problem decomposition method called neuron-based sub-populations [15] for training recurrent neural networks on chaotic time series problems. Synapse level problem decomposition [16] is also used for comparison. This will help determine which problem decomposition method is more suitable for time series problems.

The rest of the paper is organised as follows. A background on recurrent neural networks and cooperative coevolution framework is presented in Section 2. Section 3 presents the different problem decomposition methods for training recurrent neural networks. Section 4 presents the results and discussion and Section 5 concludes the paper with a discussion on future work.


Recurrent neural networks

Recurrent neural networks are dynamical systems whose next state and output depend on the present network state and input. The Elman recurrent network [20] employs a context layer which keeps a copy of the hidden layer outputs from the previous time step. Such networks are composed of an input layer, a context layer which provides state information, a hidden layer and an output layer. Each layer contains one or more neurons which propagate information from one layer to another by computing a weighted sum of their inputs and passing it through an activation function.
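
A minimal sketch of this forward computation for a single time step is given below (Python/NumPy); the sigmoid activation and the 1-3-1 topology are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

def elman_step(x, h_prev, W_xh, W_hh, W_hy, b_h, b_y):
    """One time step of an Elman network: h_prev plays the role of the
    context layer, holding the previous hidden activations."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sigmoid(W_xh @ x + W_hh @ h_prev + b_h)  # hidden state from input + context
    y = sigmoid(W_hy @ h + b_y)                  # network output
    return h, y

# Usage: run a random 1-input, 3-hidden, 1-output network over a short sequence.
rng = np.random.default_rng(0)
W_xh, W_hh = rng.standard_normal((3, 1)), rng.standard_normal((3, 3))
W_hy, b_h, b_y = rng.standard_normal((1, 3)), np.zeros(3), np.zeros(1)
h = np.zeros(3)
for x_t in [0.1, 0.5, 0.9]:
    h, y = elman_step(np.array([x_t]), h, W_xh, W_hh, W_hy, b_h, b_y)
```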

Cooperative coevolution for evolving recurrent neural networks

Problem decomposition determines the size of a subcomponent and the way it is encoded. Problem decomposition is also known as the encoding scheme for training neural networks [15]. This section gives details of the neuron and synapse level encoding schemes for training recurrent neural networks.

In this paper, the neuron level encoding employs neuron-based sub-populations (NSP) [15]. In NSP, each neuron in the hidden layer and the output layer is a reference point for a sub-population. Each sub-population consists of the incoming connections associated with its neuron: for a hidden neuron, the weights from the input and context layers, and for an output neuron, the weights from the hidden layer.
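
The snippet below sketches the resulting subcomponent sizes for a small Elman network, assuming each sub-population also carries its neuron's bias (an assumption made for illustration; the topology is arbitrary). These sizes could then drive a cooperative coevolution loop like the earlier sketch:

```python
# NSP-style decomposition sizes for a 1-input, 3-hidden, 1-output Elman network.
n_in, n_hid, n_out = 1, 3, 1

# Hidden neuron sub-population: weights from the input layer, weights from the
# context layer (one per hidden neuron) and a bias (bias inclusion is assumed).
hidden_size = n_in + n_hid + 1

# Output neuron sub-population: weights from the hidden layer and a bias.
output_size = n_hid + 1

subcomp_sizes = [hidden_size] * n_hid + [output_size] * n_out
print(subcomp_sizes)  # [5, 5, 5, 4] -> one sub-population per neuron
```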

Experimentation, results and analysis

This section presents an experimental study of the two different problem decomposition methods in cooperative coevolution of recurrent neural networks. The neuron level (NL) and synapse level (SL) problem decomposition methods are used for training Elman recurrent networks [20] on chaotic time series problems from the literature. The results are also compared to an evolutionary algorithm (EA). Two simulated and one real-world chaotic time series problems are used: the Mackey-Glass and Lorenz time series are simulated, while the Sunspot time series is taken from real-world observations.
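
As a sketch of how such data can be prepared, the Mackey-Glass series is commonly generated from the delay differential equation dx/dt = a*x(t - tau)/(1 + x(t - tau)^c) - b*x(t) with tau = 17 for the chaotic regime, and a sliding window turns the series into one-step-ahead training pairs. The Euler integration, parameter values and window length below are common choices, not necessarily the paper's exact setup:

```python
import numpy as np

def mackey_glass(n, tau=17, a=0.2, b=0.1, c=10.0, dt=1.0, x0=1.2):
    """Euler integration of the Mackey-Glass delay differential equation."""
    history = int(tau / dt)
    x = np.full(n + history, x0)
    for t in range(history, n + history - 1):
        x_tau = x[t - history]
        x[t + 1] = x[t] + dt * (a * x_tau / (1.0 + x_tau ** c) - b * x[t])
    return x[history:]

def one_step_pairs(series, window=4):
    """Slide a window over the series to form (input, target) pairs for
    one-step-ahead prediction; the window length is an illustrative choice."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

X, y = one_step_pairs(mackey_glass(1000))
```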

Conclusions and future work

The paper has presented an application of two different problem decomposition methods for cooperative coevolution of recurrent neural networks on chaotic time series problems. The two problem decomposition methods in cooperative coevolution have performed better than a standard evolutionary algorithm. The results show that the given methods have been able to predict chaotic time series with a good level of accuracy in comparison to some of the methods from the literature.


References (45)

• E. Lorenz, Deterministic non-periodic flows, J. Atmos. Sci. (1963)
• H.K. Stephen, In the Wake of Chaos: Unpredictable Order in Dynamical Systems (1993)
• M.B. Kennel et al., Method to distinguish possible chaos from colored noise and to determine embedding parameters, Phys. Rev. A (1992)
• S. Kawauchi, H. Sugihara, H. Sasaki, Development of very-short-term load forecasting based on chaos theory, Electr. ...
• E. Lorenz, The Essence of Chaos (1993)
• T. Koskela, M. Lehtokangas, J. Saarinen, K. Kaski, Time series prediction with multilayer perceptron, Proceedings of ...
• S. Sello, Solar cycle forecasting: a nonlinear dynamics approach, Astron. Astrophys. (2001)
• A. Gholipour et al., Predicting chaotic time series using neural and neurofuzzy models: a comparative study, Neural Process. Lett. (2006)
• M.A. Potter et al., A cooperative coevolutionary approach to function optimization
• M.A. Potter et al., Cooperative coevolution: an architecture for evolving coadapted subcomponents, Evol. Comput. (2000)
• N. Garcia-Pedrajas et al., COVNET: a cooperative coevolutionary model for evolving artificial neural networks, IEEE Trans. Neural Networks (2003)
• N. Garcia-Pedrajas et al., Cooperative coevolution of artificial neural network ensembles for pattern classification, IEEE Trans. Evol. Comput. (2005)

Rohitash Chandra is a Research Assistant in Computer Science at the School of Engineering and Computer Science, Victoria University of Wellington. He has recently completed his PhD in Computer Science at the same institution. He holds a MSc in Computer Science from the University of Fiji and a BSc from the University of the South Pacific. His research interests in general encompass the methodologies and applications of Artificial Intelligence. More specifically, he is interested in Neural and Evolutionary Computation methods such as Feedforward and Recurrent Networks, Genetic Algorithms, Cooperative Coevolution and Neuro-evolution, with applications in Pattern Classification, Time Series Prediction, Control, Robot Kinematics and Environmental Informatics. He is currently working on problems in Computational Biology. Apart from his interest in science, he is actively involved in literature and has been the editor of the Blue Fog Journal. He is a poet and his third poetry collection is titled "Being at Home", which is due to be launched in 2012. He is also the founder of the Software Foundation of Fiji.

Mengjie Zhang is Professor of Computer Science at the School of Engineering and Computer Science, Victoria University of Wellington. His research interests are in Genetic Programming, Swarm Intelligence, Data Mining and Machine Learning.
