1991 | Original Paper | Book Chapter
Model Building and Forecasting with ARIMA Processes
Authors: Peter J. Brockwell, Richard A. Davis
Published in: Time Series: Theory and Methods
Publisher: Springer New York
Included in: Professional Book Archive
In this chapter we shall examine the problem of selecting an appropriate model for a given set of observations {X<sub>t</sub>, t = 1, …, n}. If the data (a) exhibits no apparent deviations from stationarity and (b) has a rapidly decreasing autocorrelation function, we shall seek a suitable ARMA process to represent the mean-corrected data. If not, then we shall first look for a transformation of the data which generates a new series with the properties (a) and (b). This can frequently be achieved by differencing, leading us to consider the class of ARIMA (autoregressive-integrated moving average) processes which is introduced in Section 9.1. Once the data has been suitably transformed, the problem becomes one of finding a satisfactory ARMA(p, q) model, and in particular of choosing (or identifying) p and q. The sample autocorrelation and partial autocorrelation functions and the preliminary estimators $$\hat{\phi}_m$$ and $$\hat{\theta}_m$$ of Sections 8.2 and 8.3 can provide useful guidance in this choice. However, our prime criterion for model selection will be the AICC, a modified version of Akaike's AIC, which is discussed in Section 9.3. According to this criterion we compute maximum likelihood estimators of φ, θ and σ² for a variety of competing p and q values and choose the fitted model with smallest AICC value. Other techniques, in particular those which use the R and S arrays of Gray et al. (1978), are discussed in the recent survey of model identification by de Gooijer et al. (1985).
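The two-step procedure described above (difference a nonstationary series, then choose the ARMA order with the smallest AICC) can be sketched as follows. This is a minimal illustration, not the chapter's own method: the data are simulated, and for simplicity it uses Yule-Walker preliminary estimates of pure AR models with an approximate Gaussian log-likelihood, rather than full maximum likelihood over general ARMA(p, q) models. All variable names and the choice of simulated ARIMA(2, 1, 0) data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Simulate hypothetical ARIMA(2,1,0) data: an integrated AR(2) series ---
n = 2000
phi_true = np.array([0.6, 0.3])          # stationary AR(2) coefficients
w = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(2, n):
    w[t] = phi_true[0] * w[t - 1] + phi_true[1] * w[t - 2] + eps[t]
x = np.cumsum(w)                          # levels are nonstationary

# Differencing (property (a)): first differences recover the stationary AR(2)
d = np.diff(x)
m = len(d)

def sample_acvf(y, max_lag):
    """Sample autocovariances gamma(0), ..., gamma(max_lag), divisor len(y)."""
    y = y - y.mean()
    k = len(y)
    return np.array([np.dot(y[: k - h], y[h:]) / k for h in range(max_lag + 1)])

def yule_walker(y, p):
    """Preliminary AR(p) fit from the Yule-Walker equations.

    Returns (phi_hat, sigma2_hat); for p = 0 the white-noise variance gamma(0).
    """
    g = sample_acvf(y, p)
    if p == 0:
        return np.array([]), g[0]
    R = np.array([[g[abs(i - j)] for j in range(p)] for i in range(p)])
    phi_hat = np.linalg.solve(R, g[1 : p + 1])
    sigma2 = g[0] - phi_hat @ g[1 : p + 1]
    return phi_hat, sigma2

def aicc(loglik, k, nobs):
    """AICC = -2 ln L + 2 k nobs / (nobs - k - 1), k = number of parameters."""
    return -2.0 * loglik + 2.0 * k * nobs / (nobs - k - 1)

# Compare candidate AR orders on the differenced series and keep the
# order with the smallest AICC value.
scores = {}
for p in range(6):
    _, sigma2 = yule_walker(d, p)
    # Gaussian log-likelihood approximated through the innovations variance
    loglik = -0.5 * m * (np.log(2.0 * np.pi * sigma2) + 1.0)
    scores[p] = aicc(loglik, p + 1, m)    # p AR coefficients plus sigma^2

best_p = min(scores, key=scores.get)
```

The AICC penalty term 2k·n/(n − k − 1) exceeds the AIC penalty 2k, so it discourages overfitting more strongly in short series; for the simulated data above, the order-2 model should score well below the white-noise model.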