
## About this Book

This book provides an overview of the current state of the art in nonlinear time series analysis, richly illustrated with examples, pseudocode algorithms and real-world applications. Avoiding a "theorem-proof" format, it shows concrete applications on a variety of empirical time series. The book is suitable for graduate courses in nonlinear time series, while also offering material for more advanced readers. Though it is largely self-contained, it assumes an understanding of basic linear time series concepts, Markov chains, and Monte Carlo simulation methods.

The book covers time-domain and frequency-domain methods for the analysis of both univariate and multivariate (vector) time series. It makes a clear distinction between parametric models on the one hand, and semi- and nonparametric models/methods on the other. This offers the reader the option of concentrating exclusively on one of these nonlinear time series analysis methods.

To make the book as user friendly as possible, major supporting concepts and specialized tables are appended at the end of every chapter. In addition, each chapter concludes with a set of key terms and concepts, as well as a summary of the main findings. Lastly, the book offers numerous theoretical and empirical exercises, with answers provided by the author in an extensive solutions manual.

## Table of Contents

### Chapter 1. Introduction and Some Basic Concepts

Abstract
Informally, a time series is a record of a fluctuating quantity observed over time that has resulted from some underlying phenomenon. The set of times at which observations are measured can be equally spaced; in that case, the resulting series is called discrete. Continuous time series, on the other hand, are obtained when observations are taken continuously over a fixed time interval. The statistical analysis can take many forms: for instance, modeling the dynamic relationships within a time series, obtaining its characteristic features, forecasting future occurrences, and testing hypotheses about marginal statistics. Our concern is with time series that occur in discrete time and are realizations of a stochastic/random process.
Jan G. De Gooijer

### Chapter 2. Classic Nonlinear Models

Abstract
In Section 1.1, we discussed in some detail the distinction between linear and nonlinear time series processes. In order to make this distinction as clear as possible, we introduce in this chapter a number of classic parametric univariate nonlinear models. By “classic” we mean that during the relatively brief history of nonlinear time series analysis, these models have proved to be useful in handling many nonlinear phenomena in terms of both tractability and interpretability. The chapter also includes some of their generalizations. However, we restrict attention to univariate nonlinear models. By “univariate”, we mean that there is one output time series and, if appropriate, a related unidirectional input (exogenous) time series. In Chapter 11, we deal with vector (multivariate) parametric models in which there are several jointly dependent time series variables. Nonparametric univariate and multivariate methods will be the focus of Chapters 4, 9 and 12.
Jan G. De Gooijer
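One of the classic models in this family is the self-exciting threshold autoregressive (SETAR) model, in which the autoregressive coefficient switches depending on a lagged value of the series itself. A minimal simulation sketch (parameter values and function names are illustrative, not taken from the book):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_setar(n, phi1=0.6, phi2=-0.5, threshold=0.0, burn=200):
    """Simulate a two-regime SETAR(2; 1, 1) process:
    Y_t = phi1 * Y_{t-1} + e_t  if Y_{t-1} <= threshold,
          phi2 * Y_{t-1} + e_t  otherwise,
    discarding an initial burn-in to reduce dependence on Y_0."""
    y = np.zeros(n + burn)
    e = rng.standard_normal(n + burn)
    for t in range(1, n + burn):
        phi = phi1 if y[t - 1] <= threshold else y[t - 1] * 0 + phi2
        y[t] = phi * y[t - 1] + e[t]
    return y[burn:]

y = simulate_setar(500)
print(y.shape)  # (500,)
```

The regime switch driven by the process's own past is what makes the model "self-exciting"; with both |phi1| and |phi2| below one, the simulated paths remain stable.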

### Chapter 3. Probabilistic Properties

Abstract
From the previous two chapters we have seen that the richness of nonlinear models is fascinating: they can handle various nonlinear phenomena met in practice. However, before selecting a particular nonlinear model we need tools to fully understand the probabilistic and statistical characteristics of the underlying DGP. For instance, precise information on the stationarity (ergodicity) conditions of a nonlinear DGP is important to circumscribe a model’s parameter space or, at the very least, to verify whether a given set of parameters lies within a permissible parameter space. Conditions for invertibility are of equal interest. Indeed, we would like to check whether present events of a time series are associated with the past in a sensible manner using an NLMA specification. Moreover, verifying (geometric) ergodicity is required for statistical inference.
Jan G. De Gooijer

### Chapter 4. Frequency-Domain Tests

Abstract
The specification and estimation of a nonlinear model may be difficult in practice and sometimes no substantial improvements in forecasting accuracy can be achieved by using a nonlinear model instead of a familiar ARMA model. Therefore, one may wish to start the model building from a linear model and abandon it only if sufficiently strong evidence for a nonlinear alternative can be found. This approach can be applied using a linearity test, often in combination with a test for Gaussianity. Several test statistics, both in the time domain and frequency domain, have been proposed for this purpose.
Jan G. De Gooijer

### Chapter 5. Time-Domain Linearity Tests

Abstract
Time-domain linearity test statistics are parametric; that is, they test the null hypothesis that a time series is generated by a linear process against a pre-chosen particular nonlinear alternative. Using the classical theory of statistical hypothesis testing, time-domain nonlinearity tests can be based on three principles – the likelihood ratio (LR), Lagrange multiplier (LM), and Wald (W) principles. LR-based test statistics require estimation of the model parameters under both the null and the alternative hypothesis, whereas test statistics based on the LM principle require estimation only under the null hypothesis. Application of W-based test statistics implies that the model parameters under the alternative hypothesis need to be estimated. Hence, in the case of complicated nonlinear alternatives, containing many more parameters than the model under the null hypothesis, test statistics constructed from the LM principle are often preferred over test statistics based on the other two testing principles.
Jan G. De Gooijer
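The LM idea above can be sketched with an auxiliary-regression check in the spirit of Tsay's (1986) F test: fit a linear AR(p) under the null, then ask whether cross-products of lagged values explain the residuals. This is a hedged illustration, not the book's exact procedure; all names and parameter choices are ours.

```python
import numpy as np
from itertools import combinations_with_replacement

def tsay_style_F(y, p=2):
    """LM-flavoured linearity check: fit AR(p) by OLS, then F-test
    whether cross-products of the lags explain the residuals."""
    n = len(y)
    Y = y[p:]
    lags = np.column_stack([y[p - i : n - i] for i in range(1, p + 1)])
    X = np.column_stack([np.ones(n - p), lags])           # null-model regressors
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta                                  # estimation under H0 only
    # auxiliary regressors: products of pairs of lagged values
    Z = np.column_stack([lags[:, i] * lags[:, j]
                         for i, j in combinations_with_replacement(range(p), 2)])
    W = np.column_stack([X, Z])
    gamma, *_ = np.linalg.lstsq(W, resid, rcond=None)
    sse0 = resid @ resid
    sse1 = np.sum((resid - W @ gamma) ** 2)
    m, df2 = Z.shape[1], len(Y) - W.shape[1]
    F = ((sse0 - sse1) / m) / (sse1 / df2)                # ~ F(m, df2) under H0
    return F, m, df2

# illustrative data: a linear AR(1), so F should be unremarkable
rng = np.random.default_rng(0)
y = np.zeros(600); e = rng.standard_normal(600)
for t in range(1, 600):
    y[t] = 0.5 * y[t - 1] + e[t]
F, m, df2 = tsay_style_F(y, p=2)
```

Note that, true to the LM principle, only the linear null model is ever estimated; the nonlinear alternative enters solely through the auxiliary regressors.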

### Chapter 6. Model Estimation, Selection, and Checking

Abstract
Model estimation, selection, and diagnostic checking are three interwoven components of time series analysis. If, within a specified class of nonlinear models, a particular linearity test statistic indicates that the DGP underlying an observed time series is indeed a nonlinear process, one would ideally like to be able to select the correct lag structure and estimate the parameters of the model. In addition, one would like to know the asymptotic properties of the estimators in order to make statistical inference. Moreover, it is evident that a good, perhaps automatic, order selection procedure (or criterion) helps to identify the most appropriate model for the purpose at hand. Finally, it is common practice to test the series of standardized residuals for white noise via a residual-based diagnostic test statistic.
Jan G. De Gooijer
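As a simple stand-in for the order-selection criteria discussed here, the sketch below picks an AR order by AIC using conditional least squares on a common estimation sample (a linear illustration of the idea; function names and the AIC variant used are our own choices, not the book's):

```python
import numpy as np

def ar_aic(y, p_max=6):
    """Select an AR order by minimizing AIC, fitting each candidate
    order on the same sample so the criteria are comparable."""
    n = len(y)
    best = None
    for p in range(1, p_max + 1):
        Y = y[p_max:]                       # common sample across all orders
        X = np.column_stack([np.ones(n - p_max)] +
                            [y[p_max - i : n - i] for i in range(1, p + 1)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        sse = np.sum((Y - X @ beta) ** 2)
        aic = (n - p_max) * np.log(sse / (n - p_max)) + 2 * (p + 1)
        if best is None or aic < best[0]:
            best = (aic, p)
    return best[1]

# illustrative data: an AR(2), so the selected order should not underfit
rng = np.random.default_rng(3)
n = 2000
y = np.zeros(n); e = rng.standard_normal(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + e[t]
p_hat = ar_aic(y)
```

AIC is known to overfit occasionally; consistent criteria such as BIC penalize extra parameters more heavily, which is one of the trade-offs an automatic selection procedure must weigh.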

### Chapter 7. Tests for Serial Independence

Abstract
Testing for randomness of a given finite time series is one of the basic problems of statistical analysis. For instance, in many time series models the noise process is assumed to consist of i.i.d. random variables, and this hypothesis should be testable. Also, it is the first issue that gets raised when checking the adequacy of a fitted time series model through observed “residuals”, i.e. are they approximately i.i.d. or are there significant deviations from that assumption. In fact, many inference procedures apply only to i.i.d. processes.
Jan G. De Gooijer
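A minimal sketch of the testing idea, using a permutation test based on the lag-1 sample autocorrelation (one of the simplest possible independence checks, chosen for illustration; the chapter's tests are more general):

```python
import numpy as np

def perm_test_lag1(x, n_perm=999, seed=0):
    """Permutation test of serial independence: under the i.i.d. null,
    randomly shuffling the series leaves its distribution unchanged,
    so the observed |r1| should not be unusually large."""
    rng = np.random.default_rng(seed)
    def r1(z):
        z = z - z.mean()
        return np.sum(z[1:] * z[:-1]) / np.sum(z * z)
    obs = abs(r1(x))
    count = sum(abs(r1(rng.permutation(x))) >= obs for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)     # permutation p-value

# illustrative data: a strongly autocorrelated AR(1), clearly not i.i.d.
rng = np.random.default_rng(7)
n = 300
x = np.zeros(n); e = rng.standard_normal(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + e[t]
p_value = perm_test_lag1(x)
```

This particular statistic only detects linear dependence at lag 1; tests against general (possibly nonlinear) dependence, which the chapter covers, need statistics sensitive to the full joint distribution.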

### Chapter 8. Time-Reversibility

Abstract
Time-reversibility (TR) amounts to temporal symmetry in the probabilistic structure of a strictly stationary time series process. In other words, a stochastic process is said to be TR if its probabilistic structure is unaffected by reversing (“mirroring”) the direction of time. Otherwise, the process is said to be time-irreversible, or non-reversible. Confirmation of time-irreversibility is important because, according to Cox (1981), it is a symptom of nonlinearity and/or non-Gaussianity. In the analysis of business cycles, for instance, the peaks and troughs of a business time series differ in magnitude, not just in sign, as the dynamics of contractions in an economy are more violent but also more short-lived than the expansions, indicating asymmetric cycles. Time irreversible behavior may also naturally arise in stochastic processes considered in, for instance, quantum mechanics, biomedicine, queuing theory, system engineering, and financial economics. Time-irreversibility automatically excludes Gaussian linear processes, or static nonlinear transformations of such processes, as possible DGPs.
Jan G. De Gooijer
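A simple sample statistic sensitive to time-irreversibility compares the bicovariances E[Y_t² Y_{t−k}] and E[Y_t Y_{t−k}²], which must coincide for a TR process (a Ramsey–Rothman-style construction; this sketch omits the standard-error calculation a formal test would need):

```python
import numpy as np

def tr_statistic(y, k=1):
    """Sample version of E[Y_t^2 Y_{t-k}] - E[Y_t Y_{t-k}^2].
    For a time-reversible process this difference is zero in
    expectation, since reversing time swaps the two moments."""
    a, b = y[k:], y[:-k]
    return np.mean(a**2 * b) - np.mean(a * b**2)

# Gaussian white noise is time-reversible, so the statistic is near zero
rng = np.random.default_rng(1)
g = rng.standard_normal(5000)
stat = tr_statistic(g)
```

For asymmetric-cycle data such as the business series described above, the statistic tends to drift away from zero at the lags where contractions and expansions differ.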

### Chapter 9. Semi- and Nonparametric Forecasting

Abstract
The time series methods we have discussed so far can be loosely classified as parametric (see, e.g., Chapter 5), and semi- and nonparametric (see, e.g., Chapter 7). For the parametric methods, usually a quite flexible but well-structured family of finite-dimensional models is considered (Chapter 2), and the modeling process typically consists of three iterative steps: identification, estimation, and diagnostic checking. Often these steps are complemented with an additional task: out-of-sample forecasting. Within this setting, specification of the functional form of a parametric time series model generally arrives from theory or from previous analysis of the underlying DGP; in both cases a great deal of knowledge must be incorporated in the modeling process. Semi- and nonparametric methods, on the other hand, are infinite-dimensional. These methods assume very little a priori information and instead base statistical inference mainly on data. Moreover, they require "weak" (qualitative) assumptions, such as smoothness of the functional form, rather than quantitative assumptions on the global form of the model.
Jan G. De Gooijer
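A prototypical nonparametric forecaster of this kind is the Nadaraya–Watson estimator of the conditional mean E[Y_t | Y_{t−1} = x], which uses only a smoothness assumption. A minimal sketch with a Gaussian kernel and a fixed, illustrative bandwidth (in practice the bandwidth would be chosen data-dependently):

```python
import numpy as np

def nw_forecast(y, x_new, h=0.5):
    """Nadaraya-Watson one-step forecast: a kernel-weighted average of
    past one-step transitions, weighted by closeness of Y_{t-1} to x_new."""
    x, target = y[:-1], y[1:]
    w = np.exp(-0.5 * ((x - x_new) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * target) / np.sum(w)

# illustrative data: an AR(1) with coefficient 0.6, so the true
# conditional mean at x_new = 1.0 is 0.6
rng = np.random.default_rng(2)
n = 2000
y = np.zeros(n); e = rng.standard_normal(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + e[t]
pred = nw_forecast(y, x_new=1.0)
```

No functional form for the conditional mean was specified anywhere; the estimator recovers it from the data alone, at the cost of a bias-variance trade-off governed by the bandwidth h.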

### Chapter 10. Forecasting

Abstract
As we saw in Chapter 9, it is fairly straightforward to forecast future values of a time series process using semi- and nonparametric methods, given data up to a certain time t. In contrast, the situation becomes more complicated when real out-of-sample forecasts are computed from parametric nonlinear time series models; in particular, as we explain below, this is a difficult issue for H ≥ 2 steps ahead.
Jan G. De Gooijer
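The difficulty for H ≥ 2 can be seen directly: because the conditional expectation does not commute with a nonlinear map, naively iterating the fitted function ("skeleton" forecasting) is biased, and one common remedy is Monte Carlo simulation of future paths. A hedged sketch with an illustrative nonlinear map (the model and parameter values are ours, for demonstration only):

```python
import numpy as np

def mc_forecast(y_last, H, f, sigma=1.0, n_paths=10_000, seed=1):
    """Monte Carlo H-step forecast for Y_t = f(Y_{t-1}) + e_t:
    simulate many future paths and average, so the noise is properly
    propagated through the nonlinearity at each step."""
    rng = np.random.default_rng(seed)
    paths = np.full(n_paths, y_last, dtype=float)
    for _ in range(H):
        paths = f(paths) + sigma * rng.standard_normal(n_paths)
    return paths.mean()

f = lambda x: np.tanh(1.5 * x)        # illustrative nonlinear conditional mean
naive = float(f(f(1.0)))              # "skeleton" 2-step forecast: E[e] plugged in as 0
mc = mc_forecast(1.0, H=2, f=f)       # simulation-based 2-step forecast
```

For a linear model the two answers would coincide, which is precisely why this issue never arises with ARMA forecasting; here the concavity of the map makes the naive iterate systematically larger than the correct conditional mean.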

### Chapter 11. Vector Parametric Models and Methods

Abstract
In this chapter, we extend the univariate nonlinear parametric time series framework to encompass multiple, related time series exhibiting nonlinear behavior. Over the past few years, many multivariate (vector) nonlinear time series models have been proposed. Some of them are "ad hoc", with a special application in mind. Others are direct multivariate extensions of their univariate counterparts. Within the latter class, a definition of a multivariate nonlinear time series model is often proposed with the following objectives in mind. First, the definition should contain the most general linear vector model as a special case when the nonlinear part is not present. This is analogous to univariate nonlinear time series models embedding linear ones. Second, the definition should contain the most general univariate nonlinear model within its class of models. Also, a potential candidate for a multivariate nonlinear time series model should possess some specified properties in order to permit estimation of the unknown model parameters and allow statistical inference. Moreover, because one of the main uses of time series analysis is forecasting, it is reasonable to restrict consideration to models which are capable of producing forecasts.
Jan G. De Gooijer

### Chapter 12. Vector Semi- and Nonparametric Methods

Abstract
Quite often it is not possible to postulate an appropriate parametric form for the DGP under study. In such cases, semi- and nonparametric methods are called for. Certain of these methods introduced in Chapter 9 can be easily extended to the multivariate (vector) framework. Specifically, let $$Y_t = (Y_{1,t},\ldots,Y_{m,t})'$$ denote an m-dimensional process.
Jan G. De Gooijer

### Backmatter

Further Information