Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction
Introduction
Time series prediction uses the present and past behaviour of a system to forecast its future. Chaos theory studies the behaviour of dynamical systems that are highly sensitive to initial conditions, where small perturbations such as noise and measurement error grow rapidly [1], [2]. This sensitivity to initial conditions, popularly known as the butterfly effect, makes long-term prediction difficult. The prediction of chaotic time series has a wide range of applications, such as finance [3], signal processing [4], power load forecasting [5], weather forecasting [6], hydrological prediction [7] and sunspot prediction [8], [9], [10].
Cooperative coevolution (CC) divides a problem into subcomponents that are represented by sub-populations [11]. An important feature of cooperative coevolution is that evolving several subcomponents provides better diversity than conventional evolutionary algorithms [12]. Problem decomposition in cooperative coevolution determines how a neural network is broken down and encoded as subcomponents. Cooperative coevolution has shown promising results in training neural networks [12], [13], [14], [15].
Problem decomposition is a major issue in the cooperative coevolution of neural networks. A decomposition method should provide enough diversity while grouping interacting variables, which in a neural network are the synapses. There are two major approaches to neuro-evolution using cooperative coevolution: decomposition at the synapse level and at the neuron level. In synapse level problem decomposition, the neural network is decomposed to its lowest level, where each weight connection (synapse) forms a subcomponent. Examples include cooperatively coevolved synapse neuro-evolution [16] and neural fuzzy networks with cultural cooperative particle swarm optimisation [17]. In neuron level problem decomposition, the neurons in the network act as the reference points for the decomposition. Examples include enforced sub-populations [18], [19] and neuron-based sub-population [15].
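To make the two schemes concrete, the sketch below (not the authors' code; the network sizes are hypothetical) groups the weights of a small Elman-style network with 2 inputs, 3 hidden neurons and 1 output into subcomponents under each decomposition:

```python
# Hypothetical network sizes for illustration: 2 inputs, 3 hidden, 1 output.
n_in, n_hid, n_out = 2, 3, 1

# Flat list of weight identifiers: input->hidden ("ih"), context->hidden
# recurrent weights ("hh"), hidden->output ("ho"), and biases ("bh", "bo").
weights = (
    [("ih", i, j) for j in range(n_hid) for i in range(n_in)]
    + [("hh", i, j) for j in range(n_hid) for i in range(n_hid)]
    + [("ho", j, k) for k in range(n_out) for j in range(n_hid)]
    + [("bh", j) for j in range(n_hid)]
    + [("bo", k) for k in range(n_out)]
)

# Synapse level decomposition: one subcomponent per weight connection.
synapse_level = [[w] for w in weights]

# Neuron level decomposition (NSP-style): each hidden or output neuron
# collects all incoming weights plus its bias into one subcomponent.
def nsp_hidden_group(j):
    """Subcomponent for hidden neuron j: input, recurrent and bias weights."""
    return [w for w in weights
            if (w[0] in ("ih", "hh") and w[2] == j) or w == ("bh", j)]

def nsp_output_group(k):
    """Subcomponent for output neuron k: hidden->output and bias weights."""
    return [w for w in weights
            if (w[0] == "ho" and w[2] == k) or w == ("bo", k)]

neuron_level = ([nsp_hidden_group(j) for j in range(n_hid)]
                + [nsp_output_group(k) for k in range(n_out)])
```

Synapse level decomposition yields one sub-population per weight (22 here), while neuron level decomposition yields one per neuron (4 here), trading diversity against the grouping of interacting weights.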
Neuron-based sub-population [15] further breaks down the encoding scheme of enforced sub-populations [18], [19]. Neuron-based sub-population performed better than enforced sub-populations and synapse level encoding for pattern recognition problems using feedforward networks. In evolving recurrent neural networks, neuron-based sub-population showed better performance than synapse level problem decomposition for grammatical inference problems [15].
This work employs synapse and neuron level problem decomposition for training recurrent neural networks for chaotic time series prediction. The synapse level employs the problem decomposition used in cooperatively coevolved synapse neuro-evolution [16], while the neuron level employs neuron-based sub-population [15]. The results are further compared with a standard evolutionary algorithm (EA) that uses a single population. These methods are used to train the Elman recurrent neural network [20] on three problems: two simulated and one real-world chaotic time series. The Lorenz and Mackey-Glass are the simulated time series, while the Sunspot series is real-world data. The prediction accuracy of the respective methods is evaluated on different neural network topologies given by different numbers of hidden neurons. The results are further compared with a number of computational intelligence methods from the literature.
The contribution of the paper is in the application of an existing problem decomposition method called neuron-based sub-population [15] for training recurrent neural networks on chaotic time series problems. Moreover, synapse level problem decomposition [16] is also used for comparison. This will help to determine which problem decomposition method is more suitable for time series problems.
The rest of the paper is organised as follows. A background on recurrent neural networks and cooperative coevolution framework is presented in Section 2. Section 3 presents the different problem decomposition methods for training recurrent neural networks. Section 4 presents the results and discussion and Section 5 concludes the paper with a discussion on future work.
Section snippets
Recurrent neural networks
Recurrent neural networks are dynamical systems whose next state and output depend on the present network state and input. The Elman recurrent network [20] employs a context layer which keeps a copy of the hidden layer outputs from the previous time step. The network is composed of an input layer, a context layer which provides state information, a hidden layer and an output layer. Each layer contains one or more neurons which propagate information from one layer to another by computing a…
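The forward computation described above can be sketched as follows. This is a minimal illustration assuming sigmoid activations throughout; the layer sizes and zero weights in the usage example are hypothetical, not taken from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def elman_step(x, context, W_ih, W_hh, b_h, W_ho, b_o):
    """One time step of an Elman network: the hidden layer reads the input
    and the context (previous hidden state); the output layer reads the
    hidden layer."""
    hidden = [sigmoid(sum(W_ih[i][j] * x[i] for i in range(len(x)))
                      + sum(W_hh[c][j] * context[c] for c in range(len(context)))
                      + b_h[j])
              for j in range(len(b_h))]
    output = [sigmoid(sum(W_ho[j][k] * hidden[j] for j in range(len(hidden)))
                      + b_o[k])
              for k in range(len(b_o))]
    return output, hidden  # the hidden state becomes the next context

# Usage: a hypothetical 2-input, 3-hidden, 1-output network with zero weights.
W_ih = [[0.0] * 3 for _ in range(2)]
W_hh = [[0.0] * 3 for _ in range(3)]
W_ho = [[0.0] * 1 for _ in range(3)]
out, new_context = elman_step([0.5, -0.2], [0.0] * 3,
                              W_ih, W_hh, [0.0] * 3, W_ho, [0.0])
```

Feeding `new_context` back in as `context` on the next call is what gives the network its memory of previous inputs.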
Cooperative coevolution for evolving recurrent neural networks
Problem decomposition determines the size of a subcomponent and the way it is encoded. Problem decomposition is also known as the encoding scheme for training neural networks [15]. This section gives details of the neuron and synapse level encoding schemes for training recurrent neural networks.
In this paper, the neuron level encoding employs neuron-based sub-population (NSP) [15]. In NSP, each neuron in the hidden layer and the output layer is a reference point for a sub-population. Each…
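The general cooperative coevolution cycle these encodings plug into can be sketched as follows: each sub-population is evolved in turn, and an individual is evaluated by joining it with the current best members of the other sub-populations. The sketch below is illustrative only, not the authors' exact algorithm; a toy sphere function stands in for the recurrent network's training error, and the population sizes and Gaussian mutation are assumptions:

```python
import random

random.seed(0)

def fitness(solution):
    return sum(v * v for v in solution)   # stand-in for prediction error

n_sub, pop_size = 4, 10                   # hypothetical sizes
subpops = [[[random.uniform(-1, 1)] for _ in range(pop_size)]
           for _ in range(n_sub)]
best = [sp[0] for sp in subpops]          # best individual per sub-population

for generation in range(100):
    for s in range(n_sub):                # round-robin over subcomponents
        def coop_fitness(ind):
            # Join this individual with the best of the other sub-populations.
            joined = [v for t, b in enumerate(best)
                      for v in (ind if t == s else b)]
            return fitness(joined)
        subpops[s].sort(key=coop_fitness)
        best[s] = subpops[s][0]
        # Elitism plus Gaussian mutation around the best individual.
        subpops[s] = [best[s]] + [[v + random.gauss(0, 0.1) for v in best[s]]
                                  for _ in range(pop_size - 1)]

final = [v for b in best for v in b]      # assembled full solution
```

The key point is that no sub-population has a fitness of its own: every evaluation is cooperative, scoring a subcomponent only in the context of a full assembled solution.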
Experimentation, results and analysis
This section presents an experimental study of the two different problem decomposition methods in cooperative coevolution of recurrent neural networks. The neuron level (NL) and synapse level (SL) problem decomposition methods are used for training Elman recurrent networks [20] on chaotic time series problems from the literature. The results are also compared to an evolutionary algorithm (EA). Two simulated and one real-world chaotic time series problems are used. The Mackey-Glass time series…
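For reference, the Mackey-Glass benchmark series is typically generated by integrating the delay differential equation dx/dt = a·x(t−τ)/(1 + x(t−τ)^10) − b·x(t) with the standard chaotic setting a = 0.2, b = 0.1, τ = 17. The Euler step and constant initial history below are common conventions, not values taken from this paper:

```python
def mackey_glass(n, a=0.2, b=0.1, tau=17, x0=1.2, dt=1.0):
    """Generate n points of the Mackey-Glass series by Euler integration
    of dx/dt = a*x(t-tau)/(1 + x(t-tau)**10) - b*x(t)."""
    x = [x0] * (tau + 1)                  # constant initial history
    for _ in range(n):
        x_tau = x[-tau - 1]               # delayed value x(t - tau)
        x.append(x[-1] + dt * (a * x_tau / (1.0 + x_tau ** 10) - b * x[-1]))
    return x[tau + 1:]                    # drop the initial history

series = mackey_glass(1000)
```

For one-step-ahead prediction the generated series is then windowed into input-output pairs via a time-delay embedding before being presented to the network.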
Conclusions and future work
The paper has presented an application of two different problem decomposition methods for the cooperative coevolution of recurrent neural networks on chaotic time series problems. Both problem decomposition methods in cooperative coevolution performed better than a standard evolutionary algorithm. The results show that the given methods predict chaotic time series with a good level of accuracy in comparison to some of the methods from the literature. The coevolutionary…
References (45)
- et al., Chaotic analysis of the foreign exchange rates, Appl. Math. Comput. (2007)
- et al., Chaotic Bayesian optimal prediction method and its application in hydrological time series, Comput. Math. Appl. (2011)
- et al., Encoding subcomponents in cooperative co-evolutionary recurrent neural networks, Neurocomputing (2011)
- Finding structure in time, Cognitive Sci. (1990)
- Practical method for determining the minimum embedding dimension of a scalar time series, Phys. D: Non-Linear Phenom. (1997)
- et al., Evolutionary algorithms for the selection of time lags for time series forecasting by fuzzy inference systems, Neurocomputing (2010)
- et al., Large scale evolutionary optimization using cooperative coevolution, Inf. Sci. (2008)
- et al., Time series analysis using normalized PG-RBF network with regression weights, Neurocomputing (2002)
- et al., Chaotic time series prediction with residual analysis method using hybrid Elman-NARX neural networks, Neurocomputing (2010)
- et al., Soft-computing techniques and ARMA model for time series prediction, Neurocomputing (2008)
- Deterministic non-periodic flows, J. Atmos. Sci.
- In the Wake of Chaos: Unpredictable Order in Dynamical Systems
- Method to distinguish possible chaos from colored noise and to determine embedding parameters, Phys. Rev. A
- The Essence of Chaos
- Solar cycle forecasting: a nonlinear dynamics approach, Astron. Astrophys.
- Predicting chaotic time series using neural and neurofuzzy models: a comparative study, Neural Process. Lett.
- A cooperative coevolutionary approach to function optimization
- Cooperative coevolution: an architecture for evolving coadapted subcomponents, Evol. Comput.
- COVNET: a cooperative coevolutionary model for evolving artificial neural networks, IEEE Trans. Neural Networks
- Cooperative coevolution of artificial neural network ensembles for pattern classification, IEEE Trans. Evol. Comput.
Rohitash Chandra is a Research Assistant in Computer Science at the School of Engineering and Computer Science, Victoria University of Wellington. He has recently completed his PhD in Computer Science at the same institution. He holds a MSc in Computer Science from the University of Fiji and a BSc from the University of the South Pacific. His research interests broadly encompass the methodologies and applications of Artificial Intelligence. More specifically, he is interested in Neural and Evolutionary Computation methods such as Feedforward and Recurrent Networks, Genetic Algorithms, Cooperative Coevolution and Neuro-evolution with applications in Pattern Classification, Time Series Prediction, Control, Robot Kinematics and Environmental Informatics. He is currently working on problems in Computational Biology. Apart from his interest in science, he is actively involved in literature and has been the editor of the Blue Fog Journal. He is a poet and his third poetry collection is titled “Being at Home”, which is due to be launched in 2012. He is also the founder of the Software Foundation of Fiji.
Mengjie Zhang is Professor of Computer Science at the School of Engineering and Computer Science, Victoria University of Wellington. His research interests are in Genetic Programming, Swarm Intelligence, Data Mining and Machine Learning.