
About this book

During the last two decades, considerable progress has been made in statistical time series analysis. The aim of this book is to present a survey of one of the most active areas in this field: the identification of autoregressive moving-average models, i.e., determining their orders. Readers are assumed to have already taken a graduate-level course on time series analysis, but otherwise the account is self-contained. The main topics covered include: Box-Jenkins' method, inverse autocorrelation functions, penalty function identification methods such as the AIC, BIC, and Hannan and Quinn's criterion, instrumental regression, and a range of pattern identification methods. Rather than covering all the methods in detail, the emphasis is on exploring the fundamental ideas underlying them. Extensive references are given to the research literature, and as a result all those engaged in research on this subject will find the book an invaluable aid to their work.

Table of Contents

Frontmatter

1. Introduction

Abstract
Consider the autoregressive moving-average (ARMA) model of orders p and q,
$$\phi \left( B \right){y_t} = \theta \left( B \right){v_t} \tag{1.1}$$
where \(\phi \left( B \right) = - {\phi _0} - {\phi _1}B - \cdots - {\phi _p}{B^p}\), \(\theta \left( B \right) = - {\theta _0} - {\theta _1}B - \cdots - {\theta _q}{B^q}\), \({\phi _0} = {\theta _0} = - 1\), \({\phi _p} \ne 0\), \({\theta _q} \ne 0\), B is the backshift operator, and \(\{v_t\}\) is a sequence of independent and identically distributed random variables with mean 0 and variance σ² (> 0). The sequence \(\{v_t\}\) is called either a white noise process or an innovation process. Some time series books define the white noise process as a sequence of uncorrelated rather than independent random variables; in practical time series analysis the distinction makes little difference. We assume that the model is stationary and invertible, i.e., that the equations ϕ(z) = 0 and θ(z) = 0 have all their roots outside the unit circle. We also assume that the two equations have no common root; this assumption is sometimes called coprimeness. The stationarity and invertibility conditions have been discussed by several authors; interested readers may consult the references in Section 1.6. In the statistical literature the white noise process is frequently assumed to be Gaussian, i.e., normally distributed; some references on non-Gaussian ARMA processes are given in Section 1.6. Throughout this book we assume that the coefficients ϕ1, …, ϕp, θ1, …, θq and the white noise variance σ² are constants, i.e., they do not depend on time.
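The stationarity and invertibility conditions can be checked numerically. The following sketch (illustrative, not from the book; coefficient vectors are hypothetical inputs) tests whether the roots of ϕ(z) and θ(z) lie outside the unit circle, using the convention ϕ(z) = 1 − ϕ₁z − ⋯ − ϕₚzᵖ implied by ϕ₀ = −1:

```python
import numpy as np

def is_stationary_invertible(phi, theta):
    """Check the ARMA conditions: phi(z) = 1 - phi_1 z - ... - phi_p z^p
    and theta(z) = 1 - theta_1 z - ... - theta_q z^q must have all
    roots strictly outside the unit circle."""
    def roots_outside(coefs):
        # Ascending-power coefficients of 1 - c_1 z - ... - c_k z^k
        poly = np.concatenate(([1.0], -np.asarray(coefs, dtype=float)))
        if poly.size == 1:
            return True                      # degree 0: no roots to check
        roots = np.roots(poly[::-1])         # np.roots expects highest power first
        return bool(np.all(np.abs(roots) > 1.0))
    return roots_outside(phi) and roots_outside(theta)
```

For example, ϕ₁ = 0.5 gives a stationary AR part (root z = 2), whereas ϕ₁ = 1.5 does not (root z = 2/3).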
ByoungSeon Choi

2. The Autocorrelation Methods

Abstract
It is known (see, e.g., T. W. Anderson [1971, pp. 463–495]) that, for large T, the bias of the sample ACRF is
$$E\left( \hat{\rho}_k \right) - \rho_k = - \frac{1}{T} \sum\limits_{j = -\infty}^{\infty} \rho_j + o\!\left( \frac{1}{T} \right).$$
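The sample ACRF that this bias result concerns can be computed directly. A minimal sketch (illustrative; the divide-by-T covariance estimator is the conventional choice):

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocorrelations rho_hat_k for k = 0, ..., max_lag,
    using the usual biased (divide-by-T) autocovariance estimator."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    y = y - y.mean()                         # center the series
    c0 = np.dot(y, y) / T                    # lag-0 autocovariance
    return np.array([np.dot(y[:T - k], y[k:]) / (T * c0)
                     for k in range(max_lag + 1)])
```

By construction the estimator satisfies ρ̂₀ = 1, and each ρ̂ₖ lies in [−1, 1].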
ByoungSeon Choi

3. Penalty Function Methods

Abstract
Since the early 1970s, some estimation-type identification procedures have been proposed. They are to choose the orders k and i minimizing
$$P(k,i) = \ln \hat{\sigma}^2_{k,i} + (k + i)\frac{C(T)}{T},$$
where \(\hat{\sigma}^2_{k,i}\) is an estimate of the white noise variance obtained by fitting the ARMA(k, i) model to the observations. Because \(\hat{\sigma}^2_{k,i}\) decreases as the orders increase, it alone cannot be a good criterion for choosing the orders. As the orders increase, the bias of the estimated model decreases while its variance increases, so we must compromise between the two. For this purpose the penalty term (k + i)C(T)/T is added to the model selection criterion. The penalty function identification methods are regarded as objective.
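The criterion is simple to evaluate once the variance estimates are in hand. A minimal sketch (the grid of fitted variances is a hypothetical input; C(T) = 2 corresponds to the AIC, C(T) = ln T to the BIC):

```python
import numpy as np

def penalty_select(sigma2, T, C):
    """Choose (k, i) minimizing P(k,i) = ln sigma2[k,i] + (k+i)*C/T.
    sigma2: (K+1, I+1) array of white-noise variance estimates from
    fitting each ARMA(k, i) model; C: penalty constant C(T)."""
    sigma2 = np.asarray(sigma2, dtype=float)
    k_idx, i_idx = np.indices(sigma2.shape)
    P = np.log(sigma2) + (k_idx + i_idx) * C / T
    k, i = np.unravel_index(np.argmin(P), P.shape)
    return int(k), int(i)
```

Note how the penalty works: a larger model is selected only if its variance reduction outweighs the extra (k + i)C(T)/T.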
ByoungSeon Choi

4. Innovation Regression Methods

Abstract
A penalty function identification of ARMA processes is to choose the orders minimizing
$$\ln \hat{\sigma}^2_{k,i} + (k + i)\frac{C(T)}{T}$$
among k = 0, …, K and i = 0, …, I. Here \(\hat{\sigma}^2_{k,i}\) is an estimate of the innovation variance obtained by fitting the ARMA(k, i) model to the observations, and K and I are upper bounds of the orders determined a priori. Because there are (K+1)×(I+1) candidate ARMA models to estimate, applying ML estimation methods is computationally onerous. Even though many algorithms have been presented for obtaining the exact ML estimates, as mentioned in Chapter 1, there remain many problems in applying them to all possible ARMA models. In particular, if an MA part is present, the ML estimates do not always lie in the stationary and invertible region, and they are sensitive to the quality of the starting values supplied to the algorithms.
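Innovation regression methods sidestep ML estimation by first proxying the unobserved innovations. A minimal sketch in the spirit of the Hannan-Rissanen two-step procedure (illustrative; the long-AR order n_ar and the plain least-squares fits are simplifying choices, not the book's exact algorithm):

```python
import numpy as np

def hannan_rissanen(y, p, q, n_ar=10):
    """Two-step innovation regression estimate of ARMA(p, q) coefficients.
    Step 1: fit a long AR(n_ar) by least squares; its residuals proxy v_t.
    Step 2: regress y_t on p lags of y and q lags of the residuals."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Step 1: long autoregression y_t ~ y_{t-1}, ..., y_{t-n_ar}
    X = np.column_stack([y[n_ar - j - 1:T - j - 1] for j in range(n_ar)])
    b, *_ = np.linalg.lstsq(X, y[n_ar:], rcond=None)
    v = np.zeros(T)
    v[n_ar:] = y[n_ar:] - X @ b              # residuals as innovation proxies
    # Step 2: y_t ~ p lags of y and q lags of v_hat
    m = n_ar + max(p, q)
    Z = np.column_stack(
        [y[m - j - 1:T - j - 1] for j in range(p)]
        + [v[m - j - 1:T - j - 1] for j in range(q)])
    coef, *_ = np.linalg.lstsq(Z, y[m:], rcond=None)
    return coef[:p], coef[p:]                # (phi_hat, theta_hat)
```

Both steps are ordinary least-squares regressions, so the whole (K+1)×(I+1) grid can be fitted far more cheaply than by exact ML.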
ByoungSeon Choi

5. Pattern Identification Methods

Abstract
Since the early 1980s, some methods utilizing the EYW equations have been used for determining the orders of an ARMA process; these are often called the pattern identification methods. The penalty function identification methods discussed in Chapter 3 have the advantage of allowing automatic determination of the orders of an ARMA process. However, they are computationally expensive, for they need ML estimates for all possible ARMA models. Even though some innovation regression methods such as the HR and the KP methods in Chapter 4 may be used, they are usually computationally exorbitant. In contrast, the pattern identification methods are computationally cheap.
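To give the flavor of these methods: pattern methods build determinants from the autocorrelations via the extended Yule-Walker (EYW) equations and look for the region of the (k, i) plane where they vanish. A corner-method-style sketch (illustrative; the exact matrix construction varies across the pattern methods surveyed in this chapter):

```python
import numpy as np

def eyw_determinant(rho, k, i):
    """Determinant of a k x k matrix of autocorrelations built from the
    EYW equations, with (a, b) entry rho_{i+a-b+1}. Because the EYW
    recurrence rho_m = sum_j phi_j rho_{m-j} holds for m > q, the
    determinant is theoretically zero whenever k > p and i >= q."""
    r = lambda j: rho[abs(j)]                # rho_{-j} = rho_j
    A = np.array([[r(i + a - b + 1) for b in range(k)] for a in range(k)])
    return np.linalg.det(A)
```

For an AR(1) process with ρ_j = ϕʲ, the 1×1 determinant at (k, i) = (1, 0) is ϕ ≠ 0, while every 2×2 determinant vanishes, revealing p = 1 and q = 0 from the zero pattern.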
ByoungSeon Choi

6. Testing Hypothesis Methods

Abstract
Historically speaking, hypothesis testing methods were the dominant tools for choosing tentative ARMA models until Box-Jenkins' identification method appeared. Nowadays they are primarily used for testing model inadequacy after choosing the orders and estimating the parameters, which Box and Jenkins (1976) called model diagnostic checking. Therefore, we discuss them in this final chapter.
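The standard diagnostic check of this kind is a portmanteau test on the fitted residuals. A minimal sketch of the Ljung-Box statistic (illustrative; under model adequacy Q is approximately chi-squared with h − p − q degrees of freedom):

```python
import numpy as np

def ljung_box(residuals, h):
    """Ljung-Box portmanteau statistic
    Q = T(T+2) * sum_{k=1}^{h} r_k^2 / (T - k),
    where r_k are the sample autocorrelations of the residuals."""
    e = np.asarray(residuals, dtype=float)
    T = len(e)
    e = e - e.mean()
    c0 = np.dot(e, e)
    r = np.array([np.dot(e[:T - k], e[k:]) / c0 for k in range(1, h + 1)])
    return T * (T + 2) * np.sum(r ** 2 / (T - np.arange(1, h + 1)))
```

Large values of Q indicate residual autocorrelation and hence an inadequate model.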
ByoungSeon Choi

Backmatter
