
2002 | Book

Modelling and Forecasting Financial Data

Techniques of Nonlinear Dynamics

Edited by: Abdol S. Soofi, Liangyue Cao

Publisher: Springer US

Book series: Studies in Computational Finance


About this book

Modelling and Forecasting Financial Data brings together a coherent and accessible set of chapters on recent research results on this topic. To make such methods readily useful in practice, the contributors to this volume have agreed to make available to readers upon request all computer programs used to implement the methods discussed in their respective chapters.
Modelling and Forecasting Financial Data is a valuable resource for researchers and graduate students studying complex systems in finance, biology, and physics, as well as those applying such methods to nonlinear time series analysis and signal processing.

Table of contents

Frontmatter

Introduction

Introduction
Abstract
Economists have always, and at times successfully, emulated the natural sciences, particularly physics and biology. This is a long tradition that goes back to the beginning of the 18th century. The physiocratic school of economic thought, which originated in France, believed that, like natural phenomena, societies are also governed by the laws of nature. Hence Turgot, a leading physiocrat, borrowed the term ‘equilibrium’ from physics and gave a mechanical analogy in describing the relative values of commodities, stating, “the unique and simple laws, founded on nature itself, in consequence of which all values that exist in trade are held in balance and settle themselves at determinate values, just as the bodies given over to their weight arrange themselves in the order of their specific gravity.” (As quoted in Spiegel, 1971).
Abdol S. Soofi, Liangyue Cao

Embedding Theory: Time-Delay Phase Space Reconstruction and Detection of Nonlinear Dynamics

Frontmatter
Chapter 1. Embedding Theory: Introduction and Applications to Time Series Analysis
Abstract
The fact that, even when we do not know the equations defining an underlying dynamical system and cannot measure all of its state space variables, we may still be able to find a one-to-one correspondence between the original state space and a space reconstructed from a few measured variables means that it is possible to identify the original state space unambiguously from measurements. This has opened a new field of research: nonlinear time series analysis. The objective of this chapter is to provide the reader with an overall picture of embedding theory and time-delay state space reconstruction techniques. We hope that this introductory chapter will guide the reader in understanding the subsequent chapters, where different relevant aspects of dynamical systems theory are discussed in more depth and detail.
F. Strozzi, J. M. Zaldivar
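The core step behind this reconstruction is the time-delay embedding of a scalar series. A minimal Python sketch of that step is shown below; the helper name delay_embed, the toy sine series and the parameter values are assumptions made for illustration, not code from the chapter.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Build delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)] from a
    scalar series (minimal sketch of time-delay reconstruction)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# toy usage: reconstruct a noisy sine wave in 3 dimensions with delay 5
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.01 * np.random.randn(t.size)
X = delay_embed(x, dim=3, tau=5)
print(X.shape)  # (1990, 3)
```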
Chapter 2. Determining Minimum Embedding Dimension from Scalar Time Series
Abstract
Determining the embedding dimension is considered one of the most important steps in nonlinear time series modelling and prediction. A number of methods for determining the minimum embedding dimension have been developed since the early studies of nonlinear time series analysis. Some of these methods are briefly reviewed in this chapter. The false nearest neighbor and the averaged false nearest neighbor methods are described in detail, given that they have been widely used in the literature. Several real economic time series are tested to demonstrate applications of the methods.
Liangyue Cao
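As a rough illustration of the false nearest neighbor idea (a simplified Kennel-style criterion, not the chapter's averaged variant), the sketch below estimates, for each trial dimension, the fraction of neighbors that separate strongly when one further delay coordinate is added; the tolerance rtol, the delay, and the noisy sine test series are assumptions for the example.

```python
import numpy as np

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def fnn_fraction(x, dim, tau, rtol=10.0):
    """Fraction of false nearest neighbors when going from dimension
    `dim` to `dim + 1` (simplified Kennel-style criterion)."""
    Xd, Xd1 = delay_embed(x, dim, tau), delay_embed(x, dim + 1, tau)
    n = len(Xd1)                       # points available in both spaces
    Xd = Xd[:n]
    false = 0
    for i in range(n):
        d = np.linalg.norm(Xd - Xd[i], axis=1)
        d[i] = np.inf                  # exclude the point itself
        j = int(np.argmin(d))          # nearest neighbor in dimension `dim`
        if d[j] == 0:
            continue
        extra = abs(Xd1[i, -1] - Xd1[j, -1])   # distance gained by new coordinate
        if extra / d[j] > rtol:
            false += 1
    return false / n

# toy usage on a noisy sine: the fraction should drop sharply once dim >= 2
x = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.01 * np.random.randn(2000)
for m in range(1, 5):
    print(m, round(fnn_fraction(x, m, tau=10), 3))
```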
Chapter 3. Mutual Information and Relevant Variables for Predictions
Abstract
In this chapter we propose a method to select from a possibly large set of observable quantities a minimal subset yielding (nearly) all relevant information on the quantity we are going to predict. We derive the theoretical background and give numerical hints and examples, including results for some daily dollar exchange rates. Our approach essentially profits from the availability of a fast algorithm for mutual information analysis.
Bernd Pompe
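A simple histogram estimate of mutual information (a crude stand-in for the fast algorithm the chapter relies on) already shows how lagged self-information can be quantified; the bin count, the logistic-map toy series and the function name below are assumptions for the example.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information I(X;Y) in bits
    (a simple stand-in for a fast mutual information algorithm)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# toy usage: lagged self-information of a chaotic logistic-map series
x = np.empty(5000); x[0] = 0.4
for t in range(1, len(x)):
    x[t] = 3.9 * x[t - 1] * (1.0 - x[t - 1])
for lag in (1, 2, 5, 10):
    print(lag, round(mutual_information(x[:-lag], x[lag:]), 3))
```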

Methods of Nonlinear Modelling and Forecasting

Frontmatter
Chapter 4. State Space Local Linear Prediction
Abstract
Local linear prediction is one of several methods that have been applied to the prediction of real time series, including financial time series. The difference from global linear prediction is that, for every single point prediction, a different linear autoregressive (AR) model is estimated based only on a number of selected past scalar data segments. Geometrically, these data segments correspond to points close to the target point when the time series is viewed in a pseudo-state space with dimension equal to the order of the local AR model.
The parameters of the local linear model are typically estimated using ordinary least squares (OLS). Apart from potential linearisation errors, a drawback of this approach is the high variance of the predictions under certain conditions. It has been shown that a set of so-called regularisation techniques, originally derived to solve ill-posed regression problems, gives more stable solutions (and thus better predictions) than OLS on noisy chaotic time series. Three regularisation techniques are considered, namely principal component regression (PCR), partial least squares (PLS) and ridge regression (RR). These methods reduce the variance compared to OLS, but introduce more bias. A main tool of this analysis is the Singular Value Decomposition (SVD), and a key to successful regularisation is to dampen the higher order SVD components. For the sake of completeness, truncated total least squares, which is designed to solve “errors-in-variables” problems, is discussed as well. Even though one would expect this method to be more appropriate for noisy time series, it turns out to give the worst predictions.
This chapter will describe the general features of local linear prediction and particularly the OLS solution and the regularisations. The statistical properties of the methods will be highlighted and explained in the setting of local linear prediction. The superiority of the predictions using regularised solutions over OLS predictions will be demonstrated using simulated data and financial data.
D. Kugiumtzis
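The following sketch conveys the flavor of state space local linear prediction with a PCR-style truncated-SVD solution of the local least squares problem; the embedding dimension, neighborhood size k, number of retained components and the noisy Hénon test series are assumptions, and the code is an illustration rather than the chapter's implementation.

```python
import numpy as np

def delay_embed(x, m, tau=1):
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def local_linear_predict(x, m=3, k=30, n_components=2):
    """One-step-ahead local linear prediction of the last point of x,
    with a PCR-style truncated-SVD solve of the local least squares
    problem (illustrative sketch)."""
    X = delay_embed(x, m)                # rows: [x(t), x(t+1), ..., x(t+m-1)]
    targets = x[m:]                      # value following each complete row
    library = X[:-1]                     # rows whose next value is known
    query = X[-1]                        # most recent delay vector
    # k nearest neighbors of the query in the reconstructed space
    idx = np.argsort(np.linalg.norm(library - query, axis=1))[:k]
    A = np.column_stack([np.ones(k), library[idx]])
    b = targets[idx]
    # truncated SVD (principal component regression flavour)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = min(n_components + 1, len(s))    # keep intercept + leading directions
    coef = Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])
    return float(np.concatenate(([1.0], query)) @ coef)

# toy usage: predict the next value of a noisy Henon-map x series
n = 1200
x = np.zeros(n); y = np.zeros(n); x[0], y[0] = 0.1, 0.1
for t in range(1, n):
    x[t] = 1.0 - 1.4 * x[t - 1] ** 2 + y[t - 1]
    y[t] = 0.3 * x[t - 1]
x += 0.001 * np.random.randn(n)
print(local_linear_predict(x))
```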
Chapter 5. Local Polynomial Prediction and Volatility Estimation in Financial Time Series
Abstract
Chaos and nonlinear theory have had a significant impact on the analysis of economic and financial time series. Nonlinearity plays an important role in explaining the empirical features of asymmetric business cycles, clustered volatility, and regime switching in finance data. In this chapter, we focus on the popular local polynomial prediction method and its applications to chaotic time series prediction and financial volatility estimation. Volatility and conditional covariance estimation is important in many aspects of modern finance theory. We introduce a nonparametric volatility model, called local ARCH, and propose a weighted least squares method for goodness of fit. The statistical theory is based on a martingale regression framework developed in Lu (1999a,b), which includes a wide variety of nonlinear time series models, such as nonlinear autoregression, ARCH, and nonlinear vector autoregressive models. Daily AOL stock data are used as an example to illustrate the developed techniques. First, we apply the nonlinear regression procedure to model the spread-volume relationship: after discovering that the spurious nonlinearity in the overall data is due to nonstationarity, we find a clear power-law relationship in all appropriate periods. We also find a markedly changing structure in GARCH models fitted to different parts of the return rate series based on closing prices. We apply the developed local ARCH theory to a stationary subseries of the return series, and find some encouraging results.
Zhan-Qian Lu
Chapter 6. Kalman Filtering of Time Series Data
Abstract
We introduce the method of Kalman filtering of time series data for linear systems and its nonlinear variant, the extended Kalman filter. We demonstrate how the filter can be applied to nonlinear systems and to reconstructions of nonlinear systems for the purposes of noise reduction, state estimation and parameter estimation. We discuss issues such as implementation of the filter equations and choices of filter parameters within the context of reconstructing nonlinear systems from data. Several examples illustrating the use of the filter are presented, including a preliminary application of the filter to economic time series data.
David M. Walker
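A minimal linear Kalman filter (the predict/update recursion) applied to a toy local-level model is sketched below; the state-space matrices, noise covariances and function name are illustrative assumptions, and the extended-filter and parameter-estimation uses discussed in the chapter are not shown.

```python
import numpy as np

def kalman_filter(z, F, H, Q, R, x0, P0):
    """Standard linear Kalman filter (predict/update) over measurements z.
    Returns the filtered state estimates (illustrative sketch)."""
    x, P = np.asarray(x0, float), np.asarray(P0, float)
    out = []
    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(zk) - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

# toy usage: smooth a noisy local-level (random walk) series
rng = np.random.default_rng(0)
truth = np.cumsum(0.1 * rng.standard_normal(200))
z = truth + 0.5 * rng.standard_normal(200)
F = np.eye(1); H = np.eye(1)
Q = 0.01 * np.eye(1); R = 0.25 * np.eye(1)
xf = kalman_filter(z, F, H, Q, R, x0=np.zeros(1), P0=np.eye(1))
print(float(np.mean((xf[:, 0] - truth) ** 2)))
```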
Chapter 7. Radial Basis Functions Networks
Abstract
The solution of complex mapping problems with artificial neural networks normally demands the use of a multi-layer network structure, in which data are processed in consecutive steps, one in each layer. Radial Basis Functions networks are a particular neural network structure that uses radial functions in the intermediate, or hidden, layer. It has been shown in the literature that feed-forward neural networks, such as Multi-Layer Perceptrons and Radial Basis Functions networks, are universal multi-variable function approximators. Since forecasting problems can be treated as a general function approximation problem of the form y(k) = f(y(k−1), y(k−2), …, u(k), u(k−1), …), where u and y are, respectively, the system input and output, it is easy to see that these network structures can be directly applied to forecasting problems.
A general introduction to Radial Basis Functions is given in this chapter. Training this network structure consists of two phases: center and radius selection for the radial functions of the hidden layer, and weight determination for the output linear layer. Several methods for both phases of training are presented throughout the chapter. Finally, an example of using Radial Basis Functions for non-linear time series forecasting is given, in order to show their applicability to complex forecasting problems.
A. Braga, A. C. Carvalho, T. Ludermir, M. de Almeida, E. Lacerda
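The two training phases described above (center/radius selection for the hidden layer, then linear output weights) can be sketched as follows; random center selection, a shared median-distance radius and the toy sine forecasting task are simplifying assumptions rather than the chapter's recommended procedure.

```python
import numpy as np

def fit_rbf(X, y, n_centers=20, seed=0):
    """Fit a Gaussian RBF network: centers picked at random from the data,
    a common radius from the median inter-center distance, and output
    weights by linear least squares (illustrative two-phase sketch)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    radius = np.median(d[d > 0])
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) ** 2
                 / (2 * radius ** 2))
    Phi = np.column_stack([np.ones(len(X)), Phi])          # bias term
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, radius, w

def predict_rbf(X, centers, radius, w):
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) ** 2
                 / (2 * radius ** 2))
    return np.column_stack([np.ones(len(X)), Phi]) @ w

# toy usage: one-step forecasting y(k) = f(y(k-1), y(k-2)) on a sine series
s = np.sin(np.linspace(0, 20 * np.pi, 1000))
X = np.column_stack([s[1:-1], s[:-2]])          # [y(k-1), y(k-2)]
y = s[2:]                                       # y(k)
c, r, w = fit_rbf(X, y)
print(float(np.mean((predict_rbf(X, c, r, w) - y) ** 2)))
```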
Chapter 8. Nonlinear Prediction of Time Series Using Wavelet Network Method
Abstract
A global approximation technique, the wavelet network, is introduced in this chapter to predict nonlinear time series. Applications of this technique are demonstrated by testing predictions, in particular short-term predictions, on various artificial time series generated from chaotic systems as well as on real-world economic time series. Prediction tests are also conducted on time series from a dynamical system whose parameter varies over time (either corrupted by noise or varying according to a certain rule): this may often be the case in economic systems, where the parameters are unlikely to remain constant over time. Numerical results on both artificial and real time series demonstrate the capability of the technique.
Liangyue Cao

Modelling and Predicting Multivariate and Input-Output Time Series

Frontmatter
Chapter 9. Nonlinear Modelling and Prediction of Multivariate Financial Time Series
Abstract
Nonlinear modelling and prediction are discussed with an emphasis on using multivariate (multiple) time series effectively. The time-delay embedding method for multivariate time series is described. Identification of relationships between variables is explored on the basis of multivariate time series. Prediction from multivariate time series is compared with that from univariate time series, and it is found that better prediction can be achieved using multivariate time series. Two sets of financial time series are tested to demonstrate applications of multivariate time series, where the local linear method is used in fitting all models.
Liangyue Cao
Chapter 10. Analysis of Economic Time Series Using Narmax Polynomial Models
Abstract
This chapter presents a methodology to fit nonlinear autoregressive moving average polynomial models with exogenous variables (NARMAX) to observed data. Because the models are nonlinear, it is sometimes possible to perform a more accurate analysis than if linear models were used. On the other hand, the model structure, that is, the set of independent variables, has to be chosen with great care lest dynamically meaningless models be fitted to the data. An effective algorithm that can be used to select the “best” regressors from a set of candidates is reviewed in detail, and a simple version of its code is provided. The chapter has been designed to serve as a tutorial introduction to modeling and analysis using NARMAX polynomials, of which linear and no-input (univariate) models are special cases. The main features of the approach are illustrated using time series of beef-cattle prices for the Brazilian State of São Paulo.
Luis Antonio Aguirre, Antonio Aguirre
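To make the regressor-selection step concrete, here is a deliberately simplified sketch: polynomial NARX regressors (no moving-average noise terms) and a greedy forward selection by residual reduction, which stands in for the ERR/orthogonal least squares algorithm reviewed in the chapter; the lag orders, the candidate degree and the simulated input-output system are assumptions.

```python
import numpy as np
from itertools import combinations_with_replacement

def candidate_regressors(y, u, ny=2, nu=2, degree=2):
    """Polynomial NARX regressors up to `degree` built from lagged outputs
    and inputs (simplified: no moving-average/noise terms)."""
    p = max(ny, nu)
    lags = [y[p - i:len(y) - i] for i in range(1, ny + 1)] \
         + [u[p - i:len(u) - i] for i in range(1, nu + 1)]
    names = [f"y(k-{i})" for i in range(1, ny + 1)] \
          + [f"u(k-{i})" for i in range(1, nu + 1)]
    target = y[p:]
    cols, labels = [np.ones(len(target))], ["1"]
    for deg in range(1, degree + 1):
        for combo in combinations_with_replacement(range(len(lags)), deg):
            cols.append(np.prod([lags[i] for i in combo], axis=0))
            labels.append("*".join(names[i] for i in combo))
    return np.column_stack(cols), labels, target

def forward_select(P, target, n_terms=4):
    """Greedy forward selection: repeatedly add the candidate regressor
    that most reduces the residual sum of squares."""
    chosen = []
    for _ in range(n_terms):
        best, best_rss = None, np.inf
        for j in range(P.shape[1]):
            if j in chosen:
                continue
            A = P[:, chosen + [j]]
            theta, *_ = np.linalg.lstsq(A, target, rcond=None)
            rss = float(np.sum((target - A @ theta) ** 2))
            if rss < best_rss:
                best, best_rss = j, rss
        chosen.append(best)
    return chosen

# toy usage on a simulated nonlinear input-output system
rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 600)
y = np.zeros(600)
for k in range(2, 600):
    y[k] = 0.5 * y[k - 1] - 0.3 * y[k - 1] * u[k - 1] + 0.8 * u[k - 2] \
           + 0.01 * rng.standard_normal()
P, labels, target = candidate_regressors(y, u)
print([labels[j] for j in forward_select(P, target)])
```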
Chapter 11. Modeling Dynamical Systems by Error Correction Neural Networks
Abstract
We introduce a new time-delay recurrent neural network called the ECNN, which includes the last model error as an additional input. Hence, the learning can interpret the model's misspecification as an external shock, which can be used to guide the model dynamics afterwards.
As an extension of the ECNN, we present a concept called overshooting, which enforces the autoregressive part of the model and thus allows long-term forecasts. For modeling high-dimensional dynamical systems, we introduce the principle of variants-invariants separation, which simplifies the high-dimensional forecasting problem by a suitable coordinate transformation. Focusing on optimal state space reconstruction, we try to specify a transformation such that the related forecast problem becomes easier, i.e. it evolves more smoothly over time. Here, we propose an integrated neural network approach which combines state space reconstruction and forecasting.
Finally, we apply the ECNN to the complete German yield curve. Our model allows a forecast of ten different interest rate maturities on forecast horizons between one and six months ahead. It turns out that our approach is superior to more conventional forecasting techniques.
Hans-Georg Zimmermann, Ralph Neuneier, Ralph Grothmann

Problems in Modelling and Prediction

Frontmatter
Chapter 12. Surrogate Data Test on Time Series
Abstract
Given a real random-like time series, the first question to answer is whether the data carry any information over time, i.e. whether the successive samples are correlated. Using standard statistical testing, the least interesting null hypothesis of white noise has to be rejected if the analysis of the time series is to be of any use at all. Further, if nonlinear methods are to be used, e.g. a sophisticated nonlinear prediction method instead of a linear autoregressive model, the null hypothesis to be rejected is that the data involve only temporal linear correlations and are otherwise random. A statistically rigorous framework for such tests is provided by the method of surrogate data. The surrogate data, generated to represent the null hypothesis, are compared to the original data under a nonlinear discriminating statistic in order to decide whether the null hypothesis can be rejected.
The surrogate data test for nonlinearity has become popular in recent years, especially with regard to the null hypothesis that the examined time series is generated by a Gaussian (linear) process undergoing a possibly nonlinear static transform. Properly designed surrogate data for this null hypothesis should possess the same autocorrelation and amplitude distribution as the original data and be otherwise random. However, the algorithms do not always provide surrogate data that preserve the original linear correlations, and this can lead to false rejections. The rejection of the null hypothesis may also depend on the applied nonlinear method and the choice of the method's parameters. Also, different observed time series from the same system may give different test results.
This chapter will describe the surrogate data test for the two hypotheses, i.e. white noise data and linear stochastic data. For the latter, some of the limitations and caveats of the test will be discussed and techniques to improve the robustness and reliability of the test will be reviewed. Finally, the tests will be applied to some financial data sets.
D. Kugiumtzis
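A minimal sketch of the Fourier-transform surrogate test is given below, with time-reversal asymmetry as the nonlinear discriminating statistic; the amplitude-adjusted variants and the caveats discussed in the chapter are omitted, and the choice of statistic, the number of surrogates and the logistic-map toy series are assumptions for the example.

```python
import numpy as np

def phase_randomised_surrogate(x, rng):
    """FT surrogate: keep the amplitude spectrum (hence the linear
    autocorrelation) of x, randomise the Fourier phases."""
    X = np.fft.rfft(x - np.mean(x))
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                      # keep the mean component real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x)) + np.mean(x)

def reversal_asymmetry(x, lag=1):
    """A simple nonlinear discriminating statistic (time-reversal asymmetry)."""
    return float(np.mean((x[lag:] - x[:-lag]) ** 3))

def surrogate_test(x, n_surrogates=99, seed=0):
    """Rank-based surrogate test of the linear null hypothesis (sketch)."""
    rng = np.random.default_rng(seed)
    q0 = reversal_asymmetry(x)
    qs = [reversal_asymmetry(phase_randomised_surrogate(x, rng))
          for _ in range(n_surrogates)]
    rank = np.sum(np.abs(qs) >= abs(q0))
    return q0, (rank + 1) / (n_surrogates + 1)   # rank-based p-value

# toy usage: a nonlinear (logistic map) series should give a small p-value
x = np.empty(1000); x[0] = 0.3
for t in range(1, len(x)):
    x[t] = 3.9 * x[t - 1] * (1.0 - x[t - 1])
print(surrogate_test(x))
```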
Chapter 13. Validation of Selected Global Models
Abstract
Once a model is obtained from data, the important task of validating it must be performed judiciously. A model is said to be a good model if it captures the relevant dynamical properties of the system studied. Several tools may be used to assess this. There are geometrical invariants (correlation dimension, Lyapunov exponents, fixed points and so on) that can be computed for the model; if such invariants can also be confidently estimated from the data, then they qualify as criteria to validate a dynamical model. Unfortunately, such invariants are sometimes not robust against noise perturbations and lack discrimination power, i.e. two different systems may exhibit the same dimension. Another approach to validating a model is to compare the topological properties of the model with those of the experimental data. This method is more robust against noise contamination than using the geometrical invariants, but it can only be applied to three-dimensional systems. A further approach is to synchronize the reconstructed model with the experimental data. This method is based on the idea that when a model has captured the relevant dynamical properties of the system studied, it can be synchronized with the experimental data using a low coupling between the data and the model. If the model is synchronized, i.e. it generates a time series close to the experimental data itself, the model reproduces the dynamics well. This method has the advantage of being applicable to a large class of systems. This chapter discusses a range of validation techniques and their applicability to validating nonlinear dynamical models, especially those obtained from economic data. Limitations and advantages of each method will be discussed.
C. Letellier, O. Ménard, L. A. Aguirre
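The synchronization-based validation idea can be illustrated on a one-dimensional toy problem: couple a candidate map to the observed series and check whether the mismatch collapses; the coupling strength, the logistic-map data and the "good"/"wrong" candidate models below are assumptions, and real applications work with flows and reconstructed state spaces rather than a known scalar map.

```python
import numpy as np

def synchronisation_error(model_map, data, coupling=0.5, transient=100):
    """Drive a candidate one-dimensional map towards the observed series
    with a fixed coupling and return the mean one-step mismatch after a
    transient (illustrative sketch of synchronisation-based validation)."""
    z = data[0]
    errors = []
    for t in range(len(data) - 1):
        pred = model_map(z)                          # model's own prediction
        errors.append(abs(pred - data[t + 1]))       # mismatch before correction
        z = pred + coupling * (data[t + 1] - pred)   # coupling to the data
    return float(np.mean(errors[transient:]))

# toy usage: logistic-map data; the correct map synchronises, a wrong one does not
x = np.empty(2000); x[0] = 0.3
for t in range(1, len(x)):
    x[t] = 3.9 * x[t - 1] * (1.0 - x[t - 1])
good = lambda z: 3.9 * z * (1.0 - z)
wrong = lambda z: 3.6 * z * (1.0 - z)
print(synchronisation_error(good, x), synchronisation_error(wrong, x))
```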
Chapter 14. Testing Stationarity in Time Series
Abstract
We propose a procedure for testing the stationarity of time series by combining a test for time independence of the probability density with one for the spectral density. The potential of this test procedure is demonstrated by its application to various types of numerically simulated time series, ranging from simple linear stochastic processes to high-dimensional transient chaos. Problems of practical implementation are discussed, in particular the relation between the length of the time series and its maximal relevant time scales. Stationarity is then tested for experimental data from geophysics and physiology. Exchange rates are found to be stationary on time scales of decades in the sense that their spectral densities do not change significantly.
Annette Witt, Jürgen Kurths
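In a similar spirit, though far cruder than the combined test proposed in the chapter, one can screen for nonstationarity by comparing the empirical distribution and the periodogram of successive segments; the segment count, the Kolmogorov-Smirnov comparison, the spectral distance and the AR(1) toy example below are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def stationarity_check(x, n_segments=4):
    """Crude stationarity screen: compare the empirical distribution and the
    periodogram of each segment against the first segment (sketch only)."""
    segs = np.array_split(np.asarray(x, float), n_segments)
    ref = segs[0]
    ref_spec = np.abs(np.fft.rfft(ref - ref.mean())) ** 2
    report = []
    for k, s in enumerate(segs[1:], start=2):
        ks_p = ks_2samp(ref, s).pvalue
        spec = np.abs(np.fft.rfft(s - s.mean())) ** 2
        m = min(len(spec), len(ref_spec))
        # relative L1 distance between normalised spectra (no formal p-value)
        spec_dist = float(np.sum(np.abs(spec[:m] / spec[:m].sum()
                                        - ref_spec[:m] / ref_spec[:m].sum())) / 2)
        report.append((k, ks_p, spec_dist))
    return report

# toy usage: an AR(1) process vs. the same process with a drifting mean
rng = np.random.default_rng(2)
e = rng.standard_normal(4000)
ar = np.zeros(4000)
for t in range(1, 4000):
    ar[t] = 0.7 * ar[t - 1] + e[t]
drift = ar + np.linspace(0, 5, 4000)
for name, series in (("stationary", ar), ("drifting", drift)):
    print(name, stationarity_check(series)[0])
```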
Chapter 15. Analysis of Economic Delayed-Feedback Dynamics
Abstract
Systems with a time-delayed feedback occur in various areas, for example in physics, climatology, physiology, and economics. In the case of a nonlinear feedback, such systems can show complex behavior, such as bifurcations, several types of oscillations, and chaotic solutions.
We propose a new technique for the analysis of deterministic nonlinear delayed-feedback systems from a time series of economic data. It is based on the concepts of maximal correlation and nonparametric regression analysis, and allows for testing time series for delay-induced dynamics and for estimating the delay times.
For high-quality data, the resulting models can be investigated themselves, which is a prerequisite for both an understanding of the feedback mechanism leading to the observed dynamics and model improvement. Since the method is nonparametric, it can be applied to a broad class of possible delay-induced dynamics.
We demonstrate the efficiency of this technique on numerical simulations of a Nerlove-Arrow model with time delay and other models. As a real-world financial data application, the time series of the gross private domestic investment of the USA is analyzed.
Henning U. Voss, Jürgen Kurths
Chapter 16. Global Modeling and Differential Embedding
Abstract
In order to reproduce the evolution of a real economy over a long period, a global model may be attempted that describes the dynamics with a small set of model coefficients. The problem is then to obtain a global model which is able to reproduce all the dynamical behavior of the data set studied, starting from a set of initial conditions. Such a global model may be built on derivative coordinates, i.e. the recorded time series and its successive derivatives. In this chapter, the mathematical background of a global modeling technique based on such a differential embedding will be exemplified on real-world test cases (electrochemical and chemical experiments). Difficulties encountered in global modeling related to the nature of economic data records will be discussed. Properties of the time series required for a successful differential model will be defined.
J. Maquet, C. Letellier, G. Gouesbet
Chapter 17. Estimation of Deterministic and Stochastic Rules Underlying Fluctuating Data
Abstract
One basic aim of scientific research is to set up reasonable models for the systems under consideration. A suitable model should reproduce the observed quantities and help to gain a deeper understanding of the system. Usually, collected data and known properties of the system, such as symmetry relations, serve as a basis for this modelling. Very often, nonlinearities and dynamical noise cause fundamental problems. In this contribution, a general, data-driven method for formulating suitable model equations for nonlinear complex systems is presented. The method is validated by application to artificially created time series. Furthermore, the results of an analysis of turbulent flow data and financial data sets are presented.
S. Siegert, R. Friedrich, Ch. Renner, J. Peinke
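For a Langevin-type system, the deterministic and stochastic parts can be read off from the first and second conditional moments of the increments; the sketch below illustrates that general idea on an Ornstein-Uhlenbeck toy process, with the binning scheme, sample-count threshold and function names being assumptions rather than the authors' procedure.

```python
import numpy as np

def drift_diffusion(x, dt, n_bins=30, min_count=20):
    """Estimate drift and diffusion functions of a Langevin-type process
    from the conditional moments of the increments (illustrative sketch)."""
    x = np.asarray(x, float)
    dx = np.diff(x)
    bins = np.linspace(x.min(), x.max(), n_bins + 1)
    centres = 0.5 * (bins[:-1] + bins[1:])
    idx = np.clip(np.digitize(x[:-1], bins) - 1, 0, n_bins - 1)
    drift = np.full(n_bins, np.nan)
    diff = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = idx == b
        if sel.sum() >= min_count:                    # enough samples per bin
            drift[b] = dx[sel].mean() / dt            # first conditional moment
            diff[b] = (dx[sel] ** 2).mean() / (2 * dt)  # second conditional moment
    return centres, drift, diff

# toy usage: Ornstein-Uhlenbeck process with drift -x and diffusion 0.5
rng = np.random.default_rng(3)
dt, n = 0.01, 100_000
noise = np.sqrt(2 * 0.5 * dt) * rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t - 1] - x[t - 1] * dt + noise[t]
centres, drift, diff = drift_diffusion(x, dt)
print(float(np.nanmean(diff)))   # should be roughly 0.5
```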
Chapter 18. Nonlinear Noise Reduction
Abstract
Several approaches can be taken in order to remove noise from measured data and to enhance the signal. Conventional linear filtering in the time or Fourier domain can be very powerful as long as nonlinear structures in the data are unimportant. Here, nonlinear means anything that is not fully characterized by second order statistics such as the power spectrum. A more recent alternative is given by nonlinear noise reduction methods that have been developed in the context of deterministic chaos. Although these methods were designed for low-dimensional, stationary, chaotic signals, we will demonstrate that it is possible to extend them to a broader class of systems. In particular, we will review a local projective noise reduction scheme, discuss the algorithm and its parameters, and show applications to different signals, among them non-deterministic and non-stationary ones. In the last section, we discuss noise reduction for economic data.
Rainer Hegger, Holger Kantz, Thomas Schreiber
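A stripped-down single pass of local projective noise reduction might look like the sketch below (brute-force neighbor search, one global embedding dimension, no edge weighting or iteration, all of which a full implementation handles more carefully); the embedding dimension, neighborhood size, number of retained directions and the noisy Hénon test series are assumptions for the example.

```python
import numpy as np

def delay_embed(x, m):
    n = len(x) - m + 1
    return np.column_stack([x[i:i + n] for i in range(m)])

def local_projective_filter(x, m=7, k=30, q=2):
    """One pass of a simple local projective noise reduction: each delay
    vector is replaced by its projection onto the q leading principal
    directions of its k nearest neighbors, and the corrections are
    averaged back onto the scalar series."""
    x = np.asarray(x, float)
    X = delay_embed(x, m)
    corrections = np.zeros(len(x))
    counts = np.zeros(len(x))
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nb = X[np.argsort(d)[:k]]                     # k nearest neighbors
        mu = nb.mean(axis=0)
        _, _, Vt = np.linalg.svd(nb - mu, full_matrices=False)
        proj = mu + (X[i] - mu) @ Vt[:q].T @ Vt[:q]   # keep leading directions
        corrections[i:i + m] += proj - X[i]
        counts[i:i + m] += 1
    return x + corrections / np.maximum(counts, 1)

# toy usage: clean a noisy Henon x series (brute-force, small example only)
n = 2000
xs, ys = np.zeros(n), np.zeros(n)
xs[0] = ys[0] = 0.1
for t in range(1, n):
    xs[t] = 1.0 - 1.4 * xs[t - 1] ** 2 + ys[t - 1]
    ys[t] = 0.3 * xs[t - 1]
noisy = xs + 0.05 * np.random.randn(n)
cleaned = local_projective_filter(noisy)
print(np.std(noisy - xs), np.std(cleaned - xs))   # second value typically smaller
```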
Chapter 19. Optimal Model Size
Abstract
We consider the selection of model size from the perspective of maximizing out-of-sample prediction power. The selection criteria consist of two components, the in-sample prediction error and a correction factor that measures the overfitting tendency of each model. The calculation of degrees of freedom is discussed for linear, nonlinear, and highly complex models.
Jianming Ye
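As a concrete, much simpler instance of trading in-sample error against a complexity correction, the sketch below selects an AR model order with an AIC-style penalty; the use of AIC, the maximum order and the simulated AR(3) series are assumptions standing in for the generalized degrees-of-freedom correction developed in the chapter.

```python
import numpy as np

def aic_model_size(x, max_order=10):
    """Choose an AR model order by penalising in-sample error with the
    number of fitted parameters (AIC-style criterion, illustrative only)."""
    x = np.asarray(x, float)
    n = len(x) - max_order                # common sample for all orders
    y = x[max_order:]
    scores = {}
    for p in range(1, max_order + 1):
        A = np.column_stack([np.ones(n)] +
                            [x[max_order - i : len(x) - i] for i in range(1, p + 1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = float(np.sum((y - A @ coef) ** 2))
        scores[p] = n * np.log(rss / n) + 2 * (p + 1)   # in-sample error + penalty
    return min(scores, key=scores.get), scores

# toy usage: the criterion should pick an order near the true AR(3) process
rng = np.random.default_rng(4)
e = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(3, 2000):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + 0.2 * x[t - 3] + e[t]
print(aic_model_size(x)[0])
```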
Chapter 20. Influence of Measured Time Series in the Reconstruction of Nonlinear Multivariable Dynamics
Abstract
In this chapter, we point out the important problem of the choice of a time series for studying a dynamical system. Indeed, it has been shown that, in spite of Takens' theorem, which states that an equivalent space may be obtained from any generic observing function, there are usually some variables that allow a better representation of the underlying dynamics than others. In other words, when only a single scalar time series can be recorded, the choice of the variable, or of a given quantity evolving with respect to time, is crucial in the modeling and analysis of the dynamics underlying the data. We give several illustrative examples for which the representation of the dynamics is biased by the choice of the variable.
C. Letellier, L. A. Aguirre

Applications in Economics and Finance

Frontmatter
Chapter 21. Nonlinear Forecasting of Noisy Financial Data
Abstract
Two series, the German mark/US dollar exchange rate and the US consumer price index, are tested to examine whether noise reduction can help improve prediction. Three nonlinear noise reduction methods, local projective (LP), singular value decomposition (SVD) and simple nonlinear filtering (SNL), are used to generate the filtered time series. Different projection dimensions of the noise reduction methods are also selected to test the sensitivity of the prediction results. The results show that noise reduction does help to improve prediction in both examples, provided that an appropriate noise reduction method and suitable parameter values are used.
Abdol S. Soofi, Liangyue Cao
Chapter 22. Canonical Variate Analysis and Its Applications to Financial Data
Abstract
We report on a novel forecasting method based on canonical variate analysis and non-linear Markov modelling, and investigate the use of a prediction algorithm to forecast conditional volatility. In particular, we assess the dynamic behaviour of the model by forecasting exchange rate volatility. Our non-linear Markov model forecasts exchange rate volatility significantly better than the GARCH(1,1)-t model. This indicates that there may be nonlinear dynamic patterns in volatility which are not captured by the linear GARCH(1,1)-t model.
Berndt Pilgram, Peter Verhoeven, Alistair Mees, Michael McAleer
Backmatter
Metadata
Title
Modelling and Forecasting Financial Data
Edited by
Abdol S. Soofi
Liangyue Cao
Copyright year
2002
Publisher
Springer US
Electronic ISBN
978-1-4615-0931-8
Print ISBN
978-1-4613-5310-2
DOI
https://doi.org/10.1007/978-1-4615-0931-8