
1996 | Book

Introduction to Time Series and Forecasting

Authors: Peter J. Brockwell, Richard A. Davis

Publisher: Springer New York

Book series: Springer Texts in Statistics


About this book

Some of the key mathematical results are stated without proof in order to make the underlying theory accessible to a wider audience. The book assumes a knowledge only of basic calculus, matrix algebra, and elementary statistics. The emphasis is on methods and the analysis of data sets. The logic and tools of model-building for stationary and non-stationary time series are developed in detail, and numerous exercises, many of which make use of the included computer package, provide the reader with ample opportunity to develop skills in this area. The core of the book covers stationary processes, ARMA and ARIMA processes, multivariate time series and state-space models, with an optional chapter on spectral analysis. Additional topics include harmonic regression, the Burg and Hannan-Rissanen algorithms, unit roots, regression with ARMA errors, structural models, the EM algorithm, generalized state-space models with applications to time series of count data, exponential smoothing, the Holt-Winters and ARAR forecasting algorithms, transfer function models and intervention analysis. Brief introductions are also given to cointegration and to non-linear, continuous-time and long-memory models. The time series package included in the back of the book is a slightly modified version of the package ITSM, published separately as ITSM for Windows, by Springer-Verlag, 1994. It does not handle data sets as large as ITSM for Windows, but like the latter, runs on IBM-PC compatible computers under either DOS or Windows (version 3.1 or later). The programs are all menu-driven so that the reader can immediately apply the techniques in the book to time series data, with a minimal investment of time in the computational and algorithmic aspects of the analysis.

Table of contents

Frontmatter
1. Introduction
Abstract
In this chapter we introduce some basic ideas of time series analysis and stochastic processes. Of particular importance are the concepts of stationarity and the autocovariance and sample autocovariance functions. Some standard techniques are described for the estimation and removal of trend and seasonality (of known period) from an observed time series. These are illustrated with reference to the data sets in Section 1.1. The calculations in all the examples can be carried out using the programs supplied on the enclosed diskette. The data sets are contained in files with names ending in .DAT. For example, the Australian red wine sales are filed as WINE.DAT. Most of the topics covered in this chapter will be developed more fully in later sections of the book. The reader who is not already familiar with random variables and random vectors should first read Appendix A, where a concise account of the required background is given.
Peter J. Brockwell, Richard A. Davis
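As a quick illustration of the sample autocovariance function referred to in this abstract, a minimal Python sketch (not part of the book, whose computations are carried out with the bundled ITSM package) could be:

```python
import numpy as np

def sample_acvf(x, max_lag):
    """Sample autocovariance gamma_hat(h), h = 0, ..., max_lag, using
    gamma_hat(h) = (1/n) * sum_{t=1}^{n-h} (x_{t+h} - xbar)(x_t - xbar)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    return np.array([np.dot(d[h:], d[:n - h]) / n for h in range(max_lag + 1)])

# For white noise the sample ACVF should be near zero at every lag h >= 1.
rng = np.random.default_rng(0)
print(sample_acvf(rng.standard_normal(200), 5))
```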
2. Stationary Processes
Abstract
A key role in time series analysis is played by processes whose properties, or some of them, do not vary with time. If we wish to make predictions, then clearly we must assume that something does not vary with time. In extrapolating deterministic functions it is common practice to assume that either the function itself or one of its derivatives is constant. The assumption of a constant first derivative leads to linear extrapolation as a means of prediction. In time series analysis our goal is to predict a series that typically is not deterministic but contains a random component. If this random component is stationary, in the sense of Definition 1.4.2, then we can develop powerful techniques to forecast its future values. These techniques will be developed and discussed in this and subsequent chapters.
Peter J. Brockwell, Richard A. Davis
3. ARMA Models
Abstract
In this chapter, we introduce an important parametric family of stationary time series, the autoregressive moving average or ARMA processes. For a large class of autocovariance functions γ(·), it is possible to find an ARMA process {X_t} with ACVF γ_X(·) such that γ(·) is well approximated by γ_X(·). In particular, for any positive integer K, there exists an ARMA process {X_t} such that γ_X(h) = γ(h) for h = 0, 1, ..., K. For this (and other) reasons, the family of ARMA processes plays a key role in the modelling of time series data. The linear structure of ARMA processes also leads to a substantial simplification of the general methods for linear prediction discussed earlier in Section 2.5.
Peter J. Brockwell, Richard A. Davis
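To make the family concrete, here is a minimal Python sketch, not taken from the book, that simulates an ARMA(1,1) process X_t = φX_{t−1} + Z_t + θZ_{t−1} with Gaussian white noise {Z_t}; the parameter values are illustrative only:

```python
import numpy as np

def simulate_arma11(n, phi, theta, sigma=1.0, burn_in=200, seed=0):
    """Simulate an ARMA(1,1) process X_t = phi*X_{t-1} + Z_t + theta*Z_{t-1}."""
    rng = np.random.default_rng(seed)
    z = rng.normal(0.0, sigma, size=n + burn_in)
    x = np.zeros(n + burn_in)
    for t in range(1, n + burn_in):
        x[t] = phi * x[t - 1] + z[t] + theta * z[t - 1]
    return x[burn_in:]  # discard the burn-in so the retained series is near-stationary

series = simulate_arma11(500, phi=0.7, theta=0.3)
print(series[:5])
```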
4. Spectral Analysis
Abstract
This chapter can be omitted without any loss of continuity. The reader with no background in Fourier or complex analysis should go straight to Chapter 5. The spectral representation of a stationary time series {X_t} essentially decomposes {X_t} into a sum of sinusoidal components with uncorrelated random coefficients. In conjunction with this decomposition there is a corresponding decomposition into sinusoids of the autocovariance function of {X_t}. The spectral decomposition is thus an analogue for stationary processes of the more familiar Fourier representation of deterministic functions. The analysis of stationary processes by means of their spectral representation is often referred to as the “frequency domain analysis” of time series or “spectral analysis.” It is equivalent to “time domain” analysis based on the autocovariance function, but provides an alternative way of viewing the process, which for some applications may be more illuminating. For example, in the design of a structure subject to a randomly fluctuating load, it is important to be aware of the presence in the loading force of a large sinusoidal component with a particular frequency to ensure that this is not a resonant frequency of the structure. The spectral point of view is also particularly useful in the analysis of multivariate stationary processes and in the analysis of linear filters. In Section 4.1 we introduce the spectral density of a stationary process {X_t}, which specifies the frequency decomposition of the autocovariance function, and the closely related spectral representation (or frequency decomposition) of the process {X_t} itself. Section 4.2 deals with the periodogram, a sample-based function from which we obtain estimators of the spectral density. In Section 4.3 we discuss time-invariant linear filters from a spectral point of view and in Section 4.4 we use the results to derive the spectral density of an arbitrary ARMA process.
Peter J. Brockwell, Richard A. Davis
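For readers who want to experiment, a minimal Python sketch of the periodogram evaluated at the Fourier frequencies λ_j = 2πj/n follows; this uses the standard estimator I(λ_j) = (1/n)|Σ_t x_t e^{−itλ_j}|² and is not code from the book:

```python
import numpy as np

def periodogram(x):
    """Periodogram I(lambda_j) = (1/n)|sum_t x_t exp(-i*t*lambda_j)|^2
    at the Fourier frequencies lambda_j = 2*pi*j/n, j = 1, ..., n//2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    dft = np.fft.fft(x)                 # DFT over all Fourier frequencies
    I = (np.abs(dft) ** 2) / n
    j = np.arange(1, n // 2 + 1)
    return 2 * np.pi * j / n, I[j]      # frequencies in (0, pi]

# A pure sinusoid of frequency 0.1 cycles/sample peaks near 2*pi*0.1 ~ 0.63.
freqs, I = periodogram(np.sin(2 * np.pi * 0.1 * np.arange(200)))
print(freqs[np.argmax(I)])
```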
5. Modelling and Forecasting with ARMA Processes
Abstract
The determination of an appropriate ARMA(p, q) model to represent an observed stationary time series involves a number of interrelated problems. These include the choice of p and q (order selection) and estimation of the mean, the coefficients {φ_i, i = 1, ..., p}, {θ_i, i = 1, ..., q}, and the white noise variance σ². Final selection of the model depends on a variety of goodness-of-fit tests, although it can be systematized to a large degree by use of criteria such as the AICC statistic discussed in Section 5.5.
Peter J. Brockwell, Richard A. Davis
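As a hedged sketch of the kind of criterion involved, the bias-corrected AIC (AICC) for an ARMA(p, q) fit is commonly written as AICC = −2 ln L + 2(p + q + 1)n/(n − p − q − 2); the Python below uses that formula with hypothetical log-likelihood values and is not the book's implementation:

```python
def aicc(log_likelihood, n, p, q):
    """AICC criterion for an ARMA(p, q) model fitted to n observations:
    AICC = -2*lnL + 2*(p + q + 1)*n / (n - p - q - 2).
    Smaller values indicate a better trade-off between fit and model size."""
    k = p + q + 1
    return -2.0 * log_likelihood + 2.0 * k * n / (n - p - q - 2)

# Hypothetical comparison of two fits to the same series of length n = 100.
print(aicc(-152.3, 100, 1, 1))   # ARMA(1,1)
print(aicc(-151.9, 100, 2, 2))   # ARMA(2,2) -- penalized more heavily
```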
6. Nonstationary and Seasonal Time Series Models
Abstract
In this chapter we shall examine the problem of finding an appropriate model for a given set of observations {x_1, ..., x_n} that are not necessarily generated by a stationary time series. If the data (a) exhibit no apparent deviations from stationarity and (b) have a rapidly decreasing autocovariance function, we attempt to fit an ARMA model to the mean-corrected data using the techniques developed in Chapter 5. Otherwise we look first for a transformation of the data that generates a new series with the properties (a) and (b). This can frequently be achieved by differencing, leading us to consider the class of ARIMA (autoregressive integrated moving average) models, defined in Section 6.1. We have in fact already encountered ARIMA processes. The model fitted in Example 5.1.1 to the Dow-Jones Utilities Index was obtained by fitting an AR model to the differenced data, thereby effectively fitting an ARIMA model to the original series. In Section 6.1 we shall give a more systematic account of such models.
Peter J. Brockwell, Richard A. Davis
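A minimal Python sketch of the differencing transformation described here (illustrative only, not the book's code; the ITSM package performs the same operations interactively):

```python
import numpy as np

def difference(x, lag=1, times=1):
    """Apply the differencing operator (1 - B^lag) to x, `times` times.

    lag=1 (applied repeatedly) removes polynomial trend;
    lag=s removes a seasonal component of known period s."""
    x = np.asarray(x, dtype=float)
    for _ in range(times):
        x = x[lag:] - x[:-lag]
    return x

# Linear trend plus period-12 seasonality: lag-12 then lag-1 differencing
# reduces the series to (essentially) zero.
t = np.arange(120, dtype=float)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12)
print(difference(difference(y, lag=12), lag=1)[:5])
```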
7. Multivariate Time Series
Abstract
Many time series arising in practice are best considered as components of some vector-valued (multivariate) time series {X_t} having not only serial dependence within each component series {X_ti} but also interdependence between the different component series {X_ti} and {X_tj}, i ≠ j. Much of the theory of univariate time series extends in a natural way to the multivariate case; however, new problems arise. In this chapter we introduce the basic properties of multivariate series and consider the multivariate extensions of some of the techniques developed earlier. In Section 7.1 we introduce two sets of bivariate time series data for which we develop multivariate models later in the chapter. In Section 7.2 we discuss the basic properties of stationary multivariate time series, namely the mean vector μ = EX_t and the covariance matrices Γ(h) = E(X_{t+h}X_t′) − μμ′, h = 0, ±1, ±2, ..., with reference to some simple examples, including multivariate white noise. Section 7.3 deals with estimation of μ and Γ(·) and the question of testing for serial independence on the basis of observations of X_1, ..., X_n. In Section 7.4 we introduce multivariate ARMA processes and illustrate the problem of multivariate model identification with an example of a multivariate AR(1) process that also has an MA(1) representation. (Such examples do not exist in the univariate case.) The identification problem can be avoided by confining attention to multivariate autoregressive (or VAR) models. Forecasting multivariate time series with known second-order properties is discussed in Section 7.5, and in Section 7.6 we consider the modelling and forecasting of multivariate time series using the multivariate Yule-Walker equations and Whittle’s generalization of the Durbin-Levinson algorithm. Section 7.7 contains a brief introduction to the notion of cointegrated time series.
Peter J. Brockwell, Richard A. Davis
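A minimal Python sketch of the sample versions of μ and Γ(h) for a multivariate series (illustrative only; the estimator shown is the usual moment estimator and the data are simulated):

```python
import numpy as np

def sample_cov_matrices(X, max_lag):
    """Sample covariance matrices Gamma_hat(h), h = 0, ..., max_lag.

    X has shape (n, m): n time points, m component series.
    Gamma_hat(h) = (1/n) * sum_{t=1}^{n-h} (X_{t+h} - Xbar)(X_t - Xbar)'."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    D = X - X.mean(axis=0)
    return [D[h:].T @ D[:n - h] / n for h in range(max_lag + 1)]

# For bivariate white noise, Gamma_hat(0) is close to the 2x2 identity
# and Gamma_hat(h) is close to zero for h >= 1.
rng = np.random.default_rng(1)
Gammas = sample_cov_matrices(rng.standard_normal((300, 2)), 2)
print(Gammas[0])
```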
8. State-Space Models
Abstract
In recent years state-space representations and the associated Kalman recursions have had a profound impact on time series analysis and many related areas. The techniques were originally developed in connection with the control of linear systems (for accounts of this subject see Davis and Vinter, 1985, and Hannan and Deistler, 1988). An extremely rich class of models for time series, including and going well beyond the linear ARIMA and classical decomposition models considered so far in this book, can be formulated as special cases of the general state-space model defined below in Section 8.1. In econometrics the structural time series models developed by Harvey (1990) are formulated (like the classical decomposition model) directly in terms of components of interest such as trend, seasonal component, and noise. However, the rigidity of the classical decomposition model is avoided by allowing the trend and seasonal components to evolve randomly rather than deterministically. An introduction to these structural models is given in Section 8.2 and a state-space representation is developed for a general ARIMA process in Section 8.3. The Kalman recursions, which play a key role in the analysis of state-space models, are derived in Section 8.4. These recursions allow a unified approach to prediction and estimation for all processes that can be given a state-space representation. Following the development of the Kalman recursions we discuss estimation with structural models (Section 8.5) and the formulation of state-space models to deal with missing values (Section 8.6).
Peter J. Brockwell, Richard A. Davis
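As a rough sketch of what the Kalman recursions compute, the following Python implements a basic filter for a linear state-space model X_{t+1} = FX_t + V_t, Y_t = GX_t + W_t; the local-level example at the end is hypothetical and the code is not taken from the book:

```python
import numpy as np

def kalman_filter(y, F, G, Q, R, x0, P0):
    """Minimal Kalman filter for the state-space model
        X_{t+1} = F X_t + V_t,   Var(V_t) = Q
        Y_t     = G X_t + W_t,   Var(W_t) = R
    Returns the filtered state estimates x_{t|t}."""
    x, P = np.asarray(x0, float), np.asarray(P0, float)
    filtered = []
    for yt in y:
        # Measurement update: correct the prediction using the new observation.
        S = G @ P @ G.T + R                      # innovation covariance
        K = P @ G.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (np.atleast_1d(yt) - G @ x)
        P = P - K @ G @ P
        filtered.append(x.copy())
        # Time update: predict the next state.
        x = F @ x
        P = F @ P @ F.T + Q
    return np.array(filtered)

# Local-level model: a random-walk state observed with noise.
F = G = np.array([[1.0]]); Q = np.array([[0.01]]); R = np.array([[1.0]])
rng = np.random.default_rng(2)
state = np.cumsum(rng.normal(0, 0.1, 100))
obs = state + rng.normal(0, 1.0, 100)
print(kalman_filter(obs, F, G, Q, R, np.zeros(1), np.eye(1))[-1])
```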
9. Forecasting Techniques
Abstract
We have focused until now on the construction of time series models for stationary and nonstationary series and the determination, assuming the appropriateness of these models, of minimum mean-squared error predictors. If the observed series had in fact been generated by the fitted model, this procedure would give minimum mean-squared error forecasts. In this chapter we discuss three forecasting techniques that have less emphasis on the explicit construction of a model for the data. Each of the three selects, from a limited class of algorithms, the one that is optimal according to specified criteria.
Peter J. Brockwell, Richard A. Davis
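A minimal Python sketch of simple exponential smoothing, the most basic of the algorithms alluded to here (illustrative only; the smoothing constant α and the data are made up):

```python
def exponential_smoothing(x, alpha):
    """Simple exponential smoothing: the one-step forecast is a weighted
    average of the latest observation and the previous smoothed value,
        m_t = alpha * x_t + (1 - alpha) * m_{t-1},   0 < alpha < 1."""
    m = x[0]
    smoothed = [m]
    for xt in x[1:]:
        m = alpha * xt + (1 - alpha) * m
        smoothed.append(m)
    return smoothed  # smoothed[-1] is the forecast of the next observation

data = [3.1, 2.9, 3.4, 3.8, 3.6, 4.0, 4.3]
print(exponential_smoothing(data, alpha=0.4)[-1])
```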
10. Further Topics
Abstract
In this final chapter we touch on a variety of topics of special interest. In Section 10.1 we consider transfer function models, designed to exploit for predictive purposes the relationship between two time series when one acts as a leading indicator for the other. Section 10.2 deals with intervention analysis, which allows for possible changes in the mechanism generating a time series, causing it to have different properties over different time intervals. In Section 10.3 we introduce the rapidly growing area of nonlinear time series analysis, and in Section 10.4 we briefly discuss continuous-time ARMA processes, which, besides being of interest in their own right, are also very useful for modelling irregularly spaced data. In Section 10.5 we discuss fractionally integrated ARMA processes, sometimes called “long-memory” processes on account of the slow rate of convergence of their autocorrelation functions to zero as the lag increases.
Peter J. Brockwell, Richard A. Davis
Backmatter
Metadata
Title
Introduction to Time Series and Forecasting
Authors
Peter J. Brockwell
Richard A. Davis
Copyright year
1996
Publisher
Springer New York
Electronic ISBN
978-1-4757-2526-1
Print ISBN
978-1-4757-2528-5
DOI
https://doi.org/10.1007/978-1-4757-2526-1