
1991 | Book | 2nd edition

Time Series: Theory and Methods

Authors: Peter J. Brockwell, Richard A. Davis

Publisher: Springer New York

Book Series: Springer Series in Statistics


About this book

This edition contains a large number of additions and corrections scattered throughout the text, including the incorporation of a new chapter on state-space models. The companion diskette for the IBM PC has expanded into the software package ITSM: An Interactive Time Series Modelling Package for the PC, which includes a manual and can be ordered from Springer-Verlag.* We are indebted to many readers who have used the book and programs and made suggestions for improvements. Unfortunately there is not enough space to acknowledge all who have contributed in this way; however, special mention must be made of our prize-winning fault-finders, Sid Resnick and F. Pukelsheim. Special mention should also be made of Anthony Brockwell, whose advice and support on computing matters was invaluable in the preparation of the new diskettes. We have been fortunate to work on the new edition in the excellent environments provided by the University of Melbourne and Colorado State University. We thank Duane Boes particularly for his support and encouragement throughout, and the Australian Research Council and National Science Foundation for their support of research related to the new material. We are also indebted to Springer-Verlag for their constant support and assistance in preparing the second edition.

Fort Collins, Colorado
November, 1990
P. J. BROCKWELL
R. A. DAVIS

* ITSM: An Interactive Time Series Modelling Package for the PC by P. J. Brockwell and R. A. Davis. ISBN: 0-387-97482-2; 1991.

Table of Contents

Frontmatter
Chapter 1. Stationary Time Series
Abstract
In this chapter we introduce some basic ideas of time series analysis and stochastic processes. Of particular importance are the concepts of stationarity and the autocovariance and sample autocovariance functions. Some standard techniques are described for the estimation and removal of trend and seasonality (of known period) from an observed series. These are illustrated with reference to the data sets in Section 1.1. Most of the topics covered in this chapter will be developed more fully in later sections of the book. The reader who is not already familiar with random vectors and multivariate analysis should first read Section 1.6 where a concise account of the required background is given. Notice our convention that an n-dimensional random vector is assumed (unless specified otherwise) to be a column vector X = (X 1, X 2, ... , X n )′ of random variables. If S is an arbitrary set then we shall use the notation S n to denote both the set of n-component column vectors with components in S and the set of n-component row vectors with components in S.
Peter J. Brockwell, Richard A. Davis
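As a concrete illustration (a Python/NumPy sketch, not part of the book), differencing is one way to remove a seasonal component of known period and a trend of the kind described above; lag-d differencing with d equal to the period removes the seasonal term, and lag-1 differencing reduces a linear trend to a constant:

```python
import numpy as np

def difference(x, lag=1):
    """Lag-d differencing (nabla_d x)_t = x_t - x_{t-d}."""
    x = np.asarray(x, dtype=float)
    return x[lag:] - x[:-lag]

# Hypothetical monthly series with a linear trend and period-12 seasonality.
t = np.arange(120)
x = 0.5 * t + 10.0 * np.sin(2 * np.pi * t / 12) + np.random.default_rng(1).normal(size=120)
y = difference(difference(x, lag=12), lag=1)  # deseasonalized, then detrended
```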
Chapter 2. Hilbert Spaces
Abstract
Although it is possible to study time series analysis without explicit use of Hilbert space terminology and techniques, there are great advantages to be gained from a Hilbert space formulation. These are largely derived from our familiarity with two- and three-dimensional Euclidean geometry and in particular with the concepts of orthogonality and orthogonal projections in these spaces. These concepts, appropriately extended to infinite-dimensional Hilbert spaces, play a central role in the study of random variables with finite second moments and especially in the theory of prediction of stationary processes.
Peter J. Brockwell, Richard A. Davis
Chapter 3. Stationary ARMA Processes
Abstract
In this chapter we introduce an extremely important class of time series {X t , t = 0, ± 1, ± 2,...} defined in terms of linear difference equations with constant coefficients. The imposition of this additional structure defines a parametric family of stationary processes, the autoregressive moving average or ARMA processes. For any autocovariance function γ(·) such that lim h→∞ γ(h) = 0, and for any integer k > 0, it is possible to find an ARMA process with autocovariance function γ X (·) such that γ X (h) = γ(h), h = 0, 1, ..., k. For this (and other) reasons the family of ARMA processes plays a key role in the modelling of time-series data. The linear structure of ARMA processes leads also to a very simple theory of linear prediction which is discussed in detail in Chapter 5.
Peter J. Brockwell, Richard A. Davis
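To make the defining difference equation concrete, here is an illustrative Python/NumPy sketch (not from the book) that iterates the ARMA recursion from zero initial conditions and discards a burn-in period, so that in the causal case the output approximates a realization of the stationary solution:

```python
import numpy as np

def simulate_arma(phi, theta, n, sigma=1.0, burn_in=200, seed=0):
    """Iterate X_t - phi_1 X_{t-1} - ... - phi_p X_{t-p} = Z_t + theta_1 Z_{t-1} + ... + theta_q Z_{t-q}
    with i.i.d. N(0, sigma^2) noise {Z_t}, starting from zeros and dropping a burn-in."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    z = rng.normal(scale=sigma, size=n + burn_in)
    x = np.zeros(n + burn_in)
    for t in range(n + burn_in):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = sum(theta[j] * z[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        x[t] = ar + z[t] + ma
    return x[burn_in:]

x = simulate_arma(phi=[0.5], theta=[0.4], n=500)  # ARMA(1,1) example
```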
Chapter 4. The Spectral Representation of a Stationary Process
Abstract
The spectral representation of a stationary process {X t , t = 0, ± 1, ... } essentially decomposes {X t } into a sum of sinusoidal components with uncorrelated random coefficients. In conjunction with this decomposition there is a corresponding decomposition into sinusoids of the autocovariance function of {X t }. The spectral decomposition is thus an analogue for stationary stochastic processes of the more familiar Fourier representation of deterministic functions. The analysis of stationary processes by means of their spectral representations is often referred to as the “frequency domain” analysis of time series. It is equivalent to “time domain” analysis, based on the autocovariance function, but provides an alternative way of viewing the process which for some applications may be more illuminating. For example in the design of a structure subject to a randomly fluctuating load it is important to be aware of the presence in the loading force of a large harmonic with a particular frequency to ensure that the frequency in question is not a resonant frequency of the structure. The spectral point of view is particularly advantageous in the analysis of multivariate stationary processes (Chapter 11) and in the analysis of very large data sets, for which numerical calculations can be performed rapidly using the fast Fourier transform (Section 10.7).
Peter J. Brockwell, Richard A. Davis
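In standard notation the two decompositions referred to above are

\[ X_t = \int_{(-\pi,\pi]} e^{it\lambda}\,dZ(\lambda), \qquad \gamma(h) = \int_{(-\pi,\pi]} e^{ih\lambda}\,dF(\lambda), \]

where {Z(λ)} is an orthogonal-increment process and F is the spectral distribution function of the process, with E|dZ(λ)|² = dF(λ); the precise statements and the conditions under which they hold are developed in the chapter.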
Chapter 5. Prediction of Stationary Processes
Abstract
In this chapter we investigate the problem of predicting the values {X t , t ≥ n + 1} of a stationary process in terms of {X 1,..., X n }. The idea is to utilize observations taken at or before time n to forecast the subsequent behaviour of {X t }. Given any closed subspace ℳ of L 2(Ω, ℱ, P), the best predictor in ℳ of X n+h is defined to be the element of ℳ with minimum mean-square distance from X n+h . This of course is not the only possible definition of “best”, but for processes with finite second moments it leads to a theory of prediction which is simple, elegant and useful in practice. (In Chapter 13 we shall introduce alternative criteria which are needed for the prediction of processes with infinite second-order moments.) In Section 2.7 we showed that the conditional expectation E(X n+h |X 1,..., X n ) and the projection of X n+h onto the closed span of {1, X 1,..., X n } are respectively the best function of X 1,..., X n and the best linear combination of 1, X 1,..., X n for predicting X n+h . For the reasons given in Section 2.7 we shall concentrate almost exclusively on predictors of the latter type (best linear predictors) instead of attempting to work with conditional expectations.
Peter J. Brockwell, Richard A. Davis
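As an illustration (a Python/NumPy sketch, not the book's algorithms; the chapter itself develops recursive methods such as the Durbin-Levinson and innovations algorithms), the one-step best linear predictor for a zero-mean stationary series can be obtained directly by solving the prediction equations built from the autocovariance function:

```python
import numpy as np

def one_step_predictor(acvf, x):
    """Best linear predictor of X_{n+1} from X_1,...,X_n for a zero-mean stationary
    series: solve Gamma_n a = gamma_n, where Gamma_n[i, j] = gamma(|i-j|) and
    gamma_n = (gamma(1), ..., gamma(n))', then form sum_i a_i X_{n+1-i}.
    `acvf` must contain gamma(0), ..., gamma(n)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    Gamma = np.array([[acvf[abs(i - j)] for j in range(n)] for i in range(n)])
    gamma_n = np.asarray(acvf[1:n + 1], dtype=float)
    a = np.linalg.solve(Gamma, gamma_n)
    return a @ x[::-1]  # a[0] weights X_n, a[1] weights X_{n-1}, ...
```

For a causal AR(1) process, for example, the solution puts all its weight on the most recent observation, recovering the familiar predictor φX n .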
Chapter 6. Asymptotic Theory
Abstract
In order to carry out statistical inference for time series it is necessary to be able to derive the distributions of various statistics used for the estimation of parameters from the data. For finite n the exact distribution of such a statistic f n (X 1,...,X n ) is usually (even for Gaussian processes) prohibitively complicated. In such cases, we can still however base the inference on large-sample approximations to the distribution of the statistic in question. The mathematical tools for deriving such approximations are developed in this chapter. A comprehensive treatment of asymptotic theory is given in the book of Serfling (1980). Chapter 5 of the book by Billingsley (1986) is also strongly recommended.
Peter J. Brockwell, Richard A. Davis
Chapter 7. Estimation of the Mean and the Autocovariance Function
Abstract
If {X t } is a real-valued stationary process, then from a second-order point of view it is characterized by its mean μ and its autocovariance function γ(·). The estimation of μ, γ(·) and the autocorrelation function ρ(·) = γ(·)/γ(0) from observations of X 1, ..., X n , therefore plays a crucial role in problems of inference and in particular in the problem of constructing an appropriate model for the data. In this chapter we consider several estimators which will be used and examine some of their properties.
Peter J. Brockwell, Richard A. Davis
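An illustrative Python/NumPy sketch (not from the book) of the estimators discussed above: the sample mean enters through centring, and the sample autocorrelation function is the ratio of sample autocovariances:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations rho_hat(h) = gamma_hat(h) / gamma_hat(0), where
    gamma_hat(h) = (1/n) sum_{t=1}^{n-h} (x_{t+h} - xbar)(x_t - xbar)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    gamma = np.array([np.sum(d[h:] * d[:n - h]) / n for h in range(max_lag + 1)])
    return gamma / gamma[0]

# For i.i.d. noise the rho_hat(h), h >= 1, are approximately N(0, 1/n), so sample
# autocorrelations outside +/- 1.96/sqrt(n) suggest genuine serial dependence.
```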
Chapter 8. Estimation for ARMA Models
Abstract
The determination of an appropriate ARMA(p, q) model to represent an observed stationary time series involves a number of inter-related problems. These include the choice of p and q (order selection), and estimation of the remaining parameters, i.e. the mean, the coefficients {φ i , θ j : i = 1,..., p; j = 1,..., q} and the white noise variance σ 2, for given values of p and q. Goodness of fit of the model must also be checked and the estimation procedure repeated with different values of p and q. Final selection of the most appropriate model depends on a variety of goodness of fit tests, although it can be systematized to a large degree by use of criteria such as the AICC statistic discussed in Chapter 9.
Peter J. Brockwell, Richard A. Davis
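For the purely autoregressive case, one of the estimation methods treated in the chapter, the Yule-Walker equations, can be sketched as follows (Python/NumPy, illustrative only; the chapter also covers preliminary and maximum likelihood estimation for general ARMA(p, q) models):

```python
import numpy as np

def yule_walker(x, p):
    """Yule-Walker estimates for an AR(p) model: solve Gamma_p phi = gamma_p with
    the sample autocovariances and set sigma2 = gamma_hat(0) - phi' gamma_p."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    gamma = np.array([np.sum(d[h:] * d[:n - h]) / n for h in range(p + 1)])
    Gamma = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(Gamma, gamma[1:])
    sigma2 = gamma[0] - phi @ gamma[1:]
    return phi, sigma2
```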
Chapter 9. Model Building and Forecasting with ARIMA Processes
Abstract
In this chapter we shall examine the problem of selecting an appropriate model for a given set of observations {X t , t = 1,…, n}. If the data (a) exhibits no apparent deviations from stationarity and (b) has a rapidly decreasing autocorrelation function, we shall seek a suitable ARMA process to represent the mean-corrected data. If not, then we shall first look for a transformation of the data which generates a new series with the properties (a) and (b). This can frequently be achieved by differencing, leading us to consider the class of ARIMA (autoregressive-integrated moving average) processes which is introduced in Section 9.1. Once the data has been suitably transformed, the problem becomes one of finding a satisfactory ARMA(p, q) model, and in particular of choosing (or identifying) p and q. The sample autocorrelation and partial autocorrelation functions and the preliminary estimators \({\hat \phi _m}\) and \({\hat \theta _m}\) of Sections 8.2 and 8.3 can provide useful guidance in this choice. However our prime criterion for model selection will be the AICC, a modified version of Akaike’s AIC, which is discussed in Section 9.3. According to this criterion we compute maximum likelihood estimators of φ, θ and σ2 for a variety of competing p and q values and choose the fitted model with smallest AICC value. Other techniques, in particular those which use the R and S arrays of Gray et al. (1978), are discussed in the recent survey of model identification by de Gooijer et al. (1985).
Peter J. Brockwell, Richard A. Davis
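A sketch of the AICC-based order selection loop described above (Python, illustrative; `fit_arma` is a hypothetical user-supplied routine returning the maximized Gaussian log-likelihood, not something defined in the book):

```python
import numpy as np

def aicc(loglik, n, p, q):
    """AICC for an ARMA(p, q) fit: -2 ln L plus the bias-corrected penalty
    2(p+q+1)n / (n-p-q-2)."""
    k = p + q + 1
    return -2.0 * loglik + 2.0 * k * n / (n - k - 1)

def select_order(x, max_p, max_q, fit_arma):
    """Fit all candidate (p, q) orders and keep the model with smallest AICC."""
    n = len(x)
    best = None
    for p in range(max_p + 1):
        for q in range(max_q + 1):
            crit = aicc(fit_arma(x, p, q), n, p, q)
            if best is None or crit < best[0]:
                best = (crit, p, q)
    return best  # (smallest AICC, chosen p, chosen q)
```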
Chapter 10. Inference for the Spectrum of a Stationary Process
Abstract
In this chapter we consider problems of statistical inference for time series based on frequency-domain properties of the series. The fundamental tool used is the periodogram, which is defined in Section 10.1 for any time series {x 1,..., x n }. Section 10.2 deals with statistical tests for the presence of “hidden periodicities” in the data. Several tests are discussed, corresponding to various different models and hypotheses which we may wish to test. Spectral analysis for stationary time series, and in particular the estimation of the spectral density, depends very heavily on the asymptotic distribution as n → ∞ of the periodogram ordinates of the series {X 1,..., X n }. The essential results are contained in Theorem 10.3.2. Under rather general conditions, the periodogram ordinates I n (λ i ) at any set of frequencies λ 1,..., λ m , 0 < λ 1 < ⋯ < λ m < π, are asymptotically independent exponential random variables with means 2πf(λ i ), where f is the spectral density of {X t }. Consequently the periodogram I n is not a consistent estimator of 2πf. Consistent estimators can however be constructed by applying linear smoothing filters to the periodogram. The asymptotic behaviour of the resulting discrete spectral average estimators can be derived from the asymptotic behaviour of the periodogram as shown in Section 10.4. Lag-window estimators of the form \({\left( {2\pi } \right)^{ - 1}}\sum\nolimits_{\left| h \right| \leqslant r} {w\left( {h/r} \right)\hat \gamma \left( h \right)} {e^{ - ih\omega }}\), where w(x), −1 ≤ x ≤ 1, is a suitably chosen weight function, are also discussed in Section 10.4 and compared with discrete spectral average estimators. Approximate confidence intervals for the spectral density are given in Section 10.5. An alternative approach to spectral density estimation, based on fitting an ARMA model to the data and computing the spectral density of the fitted process, is discussed in Section 10.6. An important role in the development of spectral analysis has been played by the fast Fourier transform algorithm, which makes possible the rapid calculation of the periodogram for very large data sets. An introduction to the algorithm and its application to the computation of autocovariances is given in Section 10.7. The chapter concludes with a discussion of the asymptotic behaviour of the maximum likelihood estimators of the coefficients of an ARMA(p, q) process.
Peter J. Brockwell, Richard A. Davis
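An illustrative Python/NumPy sketch (not the book's ITSM software) of the periodogram, under the convention I n (ω j ) = n⁻¹|Σ t x t e^{-itω j }|² at the Fourier frequencies ω j = 2πj/n, together with a simple uniformly weighted discrete spectral average:

```python
import numpy as np

def periodogram(x):
    """Periodogram I_n(w_j) = (1/n) |sum_t x_t exp(-i t w_j)|^2 at the Fourier
    frequencies w_j = 2*pi*j/n lying in (0, pi)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    I = np.abs(np.fft.fft(x))**2 / n
    return 2 * np.pi * j / n, I[j]

def smoothed_spectral_estimate(x, m):
    """Discrete spectral average with uniform weights: estimate 2*pi*f(w_j) by
    averaging the periodogram over the (at most) 2m+1 nearest Fourier frequencies
    (the window is simply truncated at the ends; Section 10.4 treats the general
    weighted form and its asymptotics)."""
    freqs, I = periodogram(x)
    est = np.array([I[max(0, k - m): k + m + 1].mean() for k in range(len(I))])
    return freqs, est
```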
Chapter 11. Multivariate Time Series
Abstract
Many time series arising in practice are best considered as components of some vector-valued (multivariate) time series {X t } whose specification includes not only the serial dependence of each component series {X tj } but also the interdependence between different component series {X ti } and {X tj }. From a second order point of view a stationary multivariate time series is determined by its mean vector, μ = EX t , and its covariance matrices Γ(h) = E(X t+h X t ′) − μμ′, h = 0, ± 1,.... Most of the basic theory of univariate time series extends in a natural way to multivariate series but new problems arise. In this chapter we show how the techniques developed earlier for univariate series are extended to the multivariate case. Estimation of the basic quantities μ and Γ(·) is considered in Section 11.2. In Section 11.3 we introduce multivariate ARMA processes and develop analogues of some of the univariate results in Chapter 3. The prediction of stationary multivariate processes, and in particular of ARMA processes, is treated in Section 11.4 by means of a multivariate generalization of the innovations algorithm used in Chapter 5. This algorithm is then applied in Section 11.5 to simplify the calculation of the Gaussian likelihood of the observations {X 1, X 2,..., X n } of a multivariate ARMA process. Estimation of parameters using maximum likelihood and (for autoregressive models) the Yule-Walker equations is also considered. In Section 11.6 we discuss the cross spectral density of a bivariate stationary process {X t } and its interpretation in terms of the spectral representation of {X t }. (The spectral representation is discussed in more detail in Section 11.8.) The bivariate periodogram and its asymptotic properties are examined in Section 11.7 and Theorem 11.7.1 gives the asymptotic joint distribution for a linear process of the periodogram matrices at frequencies λ 1, λ 2,..., λ m ∈ (0, π). Smoothing of the periodogram is used to estimate the cross-spectrum and hence the cross-amplitude spectrum, phase spectrum and squared coherency for which approximate confidence intervals are given. The chapter ends with an introduction to the spectral representation of an m-variate stationary process and multivariate linear filtering.
Peter J. Brockwell, Richard A. Davis
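An illustrative Python/NumPy sketch (not from the book) of the sample version of the covariance matrices Γ(h) referred to above:

```python
import numpy as np

def sample_cross_covariance(X, h):
    """Sample covariance matrix of a multivariate series X (rows = times t = 1..n):
    Gamma_hat(h) = (1/n) * sum_{t=1}^{n-h} (X_{t+h} - Xbar)(X_t - Xbar)',  h >= 0;
    for h < 0 use Gamma_hat(h) = Gamma_hat(-h)'."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    D = X - X.mean(axis=0)
    return D[h:].T @ D[:n - h] / n
```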
Chapter 12. State-Space Models and the Kalman Recursions
Abstract
In recent years, state-space representations and the associated Kalman recursions have had a profound impact on time series analysis and many related areas. The techniques were originally developed in connection with the control of linear systems (for accounts of this subject, see the books of Davis and Vinter (1985) and Hannan and Deistler (1988)). The general form of the state-space model needed for the applications in this chapter is defined in Section 12.1, where some illustrative examples are also given. The Kalman recursions are developed in Section 12.2 and applied in Section 12.3 to the analysis of ARMA and ARIMA processes with missing values. In Section 12.4 we examine the fundamental concepts of controllability and observability and their relevance to the determination of the minimal dimension of a state-space representation. Section 12.5 deals with recursive Bayesian state estimation, which can be used (at least in principle) to compute conditional expectations for a large class of not necessarily Gaussian processes. Further applications of the Bayesian approach can be found in the papers of Sorenson and Alspach (1971), Kitagawa (1987) and Grunwald, Raftery and Guttorp (1989).
Peter J. Brockwell, Richard A. Davis
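A minimal time-invariant Kalman prediction recursion in Python/NumPy (an illustrative sketch only; the chapter's recursions allow time-varying coefficient matrices, missing observations and more general initial conditions):

```python
import numpy as np

def kalman_predictions(y, F, G, Q, R, x0, P0):
    """One-step Kalman prediction recursions for the state-space model
        X_{t+1} = F X_t + V_t,  cov(V_t) = Q    (state equation)
        Y_t     = G X_t + W_t,  cov(W_t) = R    (observation equation),
    returning the predicted states x_hat(t | t-1) and their error covariances."""
    x = np.asarray(x0, dtype=float)
    P = np.asarray(P0, dtype=float)
    preds, covs = [], []
    for yt in y:
        preds.append(x.copy())
        covs.append(P.copy())
        innov = np.atleast_1d(yt) - G @ x           # innovation
        S = G @ P @ G.T + R                         # innovation covariance
        K = F @ P @ G.T @ np.linalg.inv(S)          # Kalman gain
        x = F @ x + K @ innov                       # next one-step state prediction
        P = F @ P @ F.T + Q - K @ S @ K.T           # its error covariance
    return np.array(preds), np.array(covs)
```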
Chapter 13. Further Topics
Abstract
In this final chapter we touch on a variety of topics of special interest. In Section 13.1 we consider transfer function models, designed to exploit, for predictive purposes, the relationship between two time series when one leads the other. Section 13.2 deals with long-memory models, characterized by very slow convergence to zero of the autocorrelations ρ(h) as h → ∞. Such models are suggested by numerous observed series in hydrology and economics. In Section 13.3 we examine linear time-series models with infinite variance and in Section 13.4 we briefly consider non-linear models and their applications.
Peter J. Brockwell, Richard A. Davis
Backmatter
Metadata
Title
Time Series: Theory and Methods
Authors
Peter J. Brockwell
Richard A. Davis
Copyright Year
1991
Publisher
Springer New York
Electronic ISBN
978-1-4419-0320-4
Print ISBN
978-1-4419-0319-8
DOI
https://doi.org/10.1007/978-1-4419-0320-4