
2005 | Book

New Introduction to Multiple Time Series Analysis

By: Professor Dr. Helmut Lütkepohl

Publisher: Springer Berlin Heidelberg


About this book

When I worked on my Introduction to Multiple Time Series Analysis (Lütkepohl (1991)), a suitable textbook for this field was not available. Given the great importance these methods have gained in applied econometric work, it is perhaps not surprising in retrospect that the book was quite successful. Now, almost one and a half decades later, the field has undergone substantial development and, therefore, the book does not cover all topics of my own courses on the subject anymore. Therefore, I started to think about a serious revision of the book when I moved to the European University Institute in Florence in 2002. Here in the lovely hills of Tuscany I had the time to think about bigger projects again and decided to prepare a substantial revision of my previous book. Because the label Second Edition was already used for a previous reprint of the book, I decided to modify the title and thereby hope to signal to potential readers that significant changes have been made relative to my previous multiple time series book.

Table of Contents

Frontmatter

Introduction

1. Introduction
Abstract
In making choices between alternative courses of action, decision makers at all structural levels often need predictions of economic variables. If time series observations are available for a variable of interest and the data from the past contain information about the future development of a variable, it is plausible to use as forecast some function of the data collected in the past. For instance, in forecasting the monthly unemployment rate, from past experience a forecaster may know that in some country or region a high unemployment rate in one month tends to be followed by a high rate in the next month. In other words, the rate changes only gradually. Assuming that the tendency prevails in future periods, forecasts can be based on current and past data.
Helmut Lütkepohl
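The persistence idea in the abstract — a high unemployment rate in one month tends to be followed by a high rate in the next — can be illustrated by a minimal sketch in which the forecast is literally "some function of the data collected in the past". All numbers here (the persistence coefficient 0.9, the noise scale) are illustrative assumptions, not values from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a persistent monthly series (hypothetical unemployment-rate
# deviations): a high value tends to be followed by a high value, because
# y_t = alpha * y_{t-1} + noise with alpha close to one.
alpha = 0.9
y = np.zeros(200)
for t in range(1, 200):
    y[t] = alpha * y[t - 1] + 0.1 * rng.standard_normal()

# One-step-ahead forecast: a simple function of the most recent observation.
forecast = alpha * y[-1]
print(forecast)
```

Assuming the tendency prevails, the best one-step predictor under this model is just a scaled-down copy of the current observation.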

Finite Order Vector Autoregressive Processes

Frontmatter
2. Stable Vector Autoregressive Processes
Abstract
In this chapter, the basic, stationary finite order vector autoregressive (VAR) model will be introduced. Some important properties will be discussed. The main uses of vector autoregressive models are forecasting and structural analysis. These two uses will be considered in Sections 2.2 and 2.3. Throughout this chapter, the model of interest is assumed to be known. Although this assumption is unrealistic in practice, it helps to see the problems related to VAR models without contamination by estimation and specification issues. The latter two aspects of an analysis will be treated in detail in subsequent chapters.
Helmut Lütkepohl
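The stability condition for a VAR process — all roots of the determinantal polynomial of the VAR operator outside the complex unit circle — can be checked numerically. For a VAR(1) this is equivalent to all eigenvalues of the coefficient matrix having modulus less than one; the coefficient matrix below is an illustrative assumption, not one of the book's examples:

```python
import numpy as np

# For y_t = nu + A1 y_{t-1} + u_t, stability is equivalent to all
# eigenvalues of A1 having modulus < 1 (i.e., all roots of det(I - A1 z)
# lying outside the unit circle). For a VAR(p), the same check applies to
# the (Kp x Kp) companion matrix of the process.
A1 = np.array([[0.5, 0.1],
               [0.4, 0.5]])

moduli = np.abs(np.linalg.eigvals(A1))   # here: 0.7 and 0.3
stable = bool(np.all(moduli < 1))
print(stable)  # True
```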
3. Estimation of Vector Autoregressive Processes
Abstract
In this chapter, it is assumed that a K-dimensional multiple time series \(y_1 , \ldots , y_T\), with \(y_t = \left( {y_{1t} , \ldots ,y_{Kt} } \right)^\prime\), is available that is known to be generated by a stationary, stable VAR(p) process
$$ y_t = \nu + A_1 y_{t - 1} + \ldots + A_p y_{t - p} + u_t . $$
(3.1.1)
All symbols have their usual meanings, that is, ν = (ν1,…,νK)′ is a (K × 1) vector of intercept terms, the Ai are (K × K) coefficient matrices, and ut is white noise with nonsingular covariance matrix Σu. In contrast to the assumptions of the previous chapter, the coefficients ν, A1,…,Ap, and Σu are assumed to be unknown in the following. The time series data will be used to estimate the coefficients. Note that, notationwise, we do not distinguish between the stochastic process and a time series as a realization of a stochastic process. The particular meaning of a symbol should be obvious from the context.
Helmut Lütkepohl
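Estimation of the unknown coefficients ν, A1,…,Ap from the time series data can be sketched for a bivariate VAR(1) by multivariate least squares — regressing y_t on an intercept and y_{t-1}. The simulated process and all numerical values below are illustrative assumptions, not the book's investment/income/consumption data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a stable bivariate VAR(1): y_t = nu + A1 y_{t-1} + u_t.
nu = np.array([0.02, 0.03])
A1 = np.array([[0.5, 0.1],
               [0.4, 0.5]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = nu + A1 @ y[t - 1] + 0.1 * rng.standard_normal(2)

# Least squares: stack regressors (1, y_{t-1}) and solve for the
# coefficient matrix equation by equation in one call.
Z = np.column_stack([np.ones(T - 1), y[:-1]])   # (T-1) x 3 regressor matrix
Y = y[1:]                                        # dependent observations
B = np.linalg.lstsq(Z, Y, rcond=None)[0]         # rows: intercept, lag coeffs
nu_hat, A1_hat = B[0], B[1:].T                   # A1_hat[i, j]: coeff of y_{j,t-1} in eq. i
```

With T = 500 observations, the estimates `nu_hat` and `A1_hat` land close to the true values used in the simulation.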
4. VAR Order Selection and Checking the Model Adequacy
Abstract
In the previous chapter, we have assumed that a K-dimensional multiple time series \(y_1 , \ldots ,y_T\), with \(y_t = \left( {y_{1t} , \ldots ,y_{Kt} } \right)^\prime\), is given, which is known to be generated by a VAR(p) process,
$$ y_t = \nu + A_1 y_{t - 1} + \ldots + A_p y_{t - p} + u_t , $$
(4.1.1)
and we have discussed estimation of the parameters \( \nu ,A_1 , \ldots ,A_p \) and \( \Sigma _u = E\left( {u_t u'_t } \right) \). In deriving the properties of the estimators, a number of assumptions were made. In practice, it will rarely be known with certainty whether the conditions required to derive the consistency and asymptotic normality of the estimators hold. Therefore, statistical tools should be used to check the validity of the assumptions made. In this chapter, some such tools will be discussed.
Helmut Lütkepohl
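One standard tool for choosing the VAR order is an information criterion that trades off fit against the number of parameters; the sketch below minimizes an AIC-type criterion, log det of the residual covariance plus a penalty 2pK²/T, over candidate orders. This is a simplified sketch (each order is fitted on its own effective sample, and the data-generating process is an assumed bivariate VAR(1)), not a reproduction of the book's procedure or examples:

```python
import numpy as np

rng = np.random.default_rng(2)

# Data generated by an (assumed) bivariate VAR(1).
A1 = np.array([[0.5, 0.0],
               [0.3, 0.4]])
T = 400
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A1 @ y[t - 1] + 0.1 * rng.standard_normal(2)

def var_aic(y, p):
    """AIC-type criterion for a VAR(p) fitted by least squares."""
    K = y.shape[1]
    T_eff = len(y) - p
    Y = y[p:]
    # Regressors: intercept plus lags y_{t-1}, ..., y_{t-p}.
    Z = np.column_stack([np.ones(T_eff)] +
                        [y[p - j:len(y) - j] for j in range(1, p + 1)])
    B = np.linalg.lstsq(Z, Y, rcond=None)[0]
    U = Y - Z @ B                         # residuals
    Sigma = U.T @ U / T_eff               # residual covariance estimate
    return np.log(np.linalg.det(Sigma)) + 2 * p * K**2 / T_eff

p_hat = min(range(1, 5), key=lambda p: var_aic(y, p))
print(p_hat)
```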
5. VAR Processes with Parameter Constraints
Abstract
In Chapter 3, we have discussed estimation of the parameters of a K-dimensional stationary, stable VAR(p) process of the form
$$ y_t = \nu + A_1 y_{t - 1} + \ldots + A_p y_{t - p} + u_t , $$
(5.1.1)
where all the symbols have their usual meanings. In the investment/income/consumption example considered throughout Chapter 3, we found that many of the coefficient estimates were not significantly different from zero. This observation may be interpreted in two ways. First, some of the coefficients may actually be zero and this fact may be reflected in the estimation results. For instance, if some variable is not Granger-causal for the remaining variables, zero coefficients are encountered. Second, insignificant coefficient estimates are found if the information in the data is not rich enough to provide sufficiently precise estimates with confidence intervals that do not contain zero.
Helmut Lütkepohl

Cointegrated Processes

Frontmatter
6. Vector Error Correction Models
Abstract
As defined in Chapter 2, a process is stationary if it has time invariant first and second moments. In particular, it does not have trends or changing variances. A VAR process has this property if the determinantal polynomial of its VAR operator has all its roots outside the complex unit circle. Clearly, stationary processes cannot capture some main features of many economic time series. For example, trends (trending means) are quite common in practice. For instance, the original investment, income, and consumption data used in many previous examples have trends (see Figure 3.1). Thus, if interest centers on analyzing the original variables (or their logarithms) rather than the rates of change, it is necessary to have models that accommodate the nonstationary features of the data. It turns out that a VAR process can generate stochastic and deterministic trends if the determinantal polynomial of the VAR operator has roots on the unit circle. In fact, it is even sufficient to allow for unit roots (roots for z = 1) to obtain a trending behavior of the variables. We will consider this case in some detail in this chapter. In the next section, the effect of unit roots in the AR operator of a univariate process will be analyzed. Variables generated by such processes are called integrated variables and the underlying generating processes are integrated processes. Vector processes with unit roots are considered in Section 6.2. In these processes, some of the variables can have common trends so that they move together to some extent.
Helmut Lütkepohl
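The effect of a unit root (a root at z = 1) in a univariate AR operator can be seen in a small simulation: with the root on the unit circle the process becomes a random walk with a stochastic trend, while an AR(1) with root outside the unit circle fluctuates around a fixed mean. The coefficients and sample size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# y_t = rho * y_{t-1} + u_t has a unit root when rho = 1: the root of the
# AR polynomial 1 - rho*z lies at z = 1. The resulting random walk is
# integrated of order one, I(1), and its variance grows with t.
T = 1000
u = rng.standard_normal(T)
random_walk = np.cumsum(u)              # rho = 1: nonstationary, I(1)
stationary = np.zeros(T)
for t in range(1, T):                   # rho = 0.5: root at z = 2, stationary
    stationary[t] = 0.5 * stationary[t - 1] + u[t]

# Differencing removes the unit root: the first difference of the random
# walk is the white noise u_t again.
print(np.allclose(np.diff(random_walk), u[1:]))  # True
```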
7. Estimation of Vector Error Correction Models
Abstract
In this chapter, estimation of VECMs is discussed. The asymptotic properties of estimators for nonstationary models differ in important ways from those of stationary processes. Therefore, in the first section, a simple special case model with no lagged differences and no deterministic terms is considered and different estimation methods for the parameters of the error correction term are treated. For this simple case, the asymptotic properties can be derived with a reasonable amount of effort and the difference to estimation in stationary models can be seen fairly easily. Therefore it is useful to treat this case in some detail. The results can then be extended to more general VECMs which are considered in Section 7.2. In Section 7.3, Bayesian estimation including the Minnesota or Litterman prior for integrated processes is discussed and forecasting and structural analysis based on estimated processes are considered in Sections 7.4–7.6.
Helmut Lütkepohl
8. Specification of VECMs
Abstract
In specifying VECMs, the lag order, the cointegration rank and possibly further restrictions have to be determined. The lag order and the cointegration rank are typically determined before further restrictions are imposed on the parameter matrices. Moreover, the specification of a VECM usually starts by determining a suitable lag length because, in choosing the lag order, the cointegration rank does not have to be known, whereas many procedures for specifying the cointegration rank require knowledge of the lag order. Therefore, in the following, we will first discuss the lag order choice (Section 8.1) and then consider procedures for determining the cointegration rank (Section 8.2). We will comment on subset modelling in a VECM framework in Section 8.3 and, in Section 8.4, we will discuss checking the adequacy of such models. More precisely, residual autocorrelation analysis, testing for nonnormality and structural change are dealt with.
Helmut Lütkepohl

Structural and Conditional Models

Frontmatter
9. Structural VARs and VECMs
Abstract
In Chapters 2 and 6, we have seen that, on the one hand, impulse responses are an important tool to uncover the relations between the variables in a VAR or VECM and, on the other hand, there are some obstacles in their interpretation. In particular, impulse responses are generally not unique and it is often not clear which set of impulse responses actually reflects the reactions of the variables in a given system. Because the different sets of impulse responses can be computed from the same underlying VAR or VECM, it is clear that nonsample information has to be used to decide on the proper set for a particular given model. In econometric terminology, VARs are reduced form models and structural restrictions are required to identify the relevant innovations and impulse responses. In this chapter, different possible restrictions that have been proposed in the literature will be considered.
Helmut Lütkepohl
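One widely used set of identifying restrictions is a recursive (Cholesky) scheme: given the reduced-form residual covariance Σu, take the lower-triangular P with Σu = PP′ and define orthogonalized innovations w_t = P⁻¹u_t with identity covariance. The sketch below uses an illustrative 2×2 covariance matrix and shows only this one scheme, not the full range of restrictions discussed in the chapter:

```python
import numpy as np

# Recursive identification via the Cholesky decomposition of the
# reduced-form residual covariance matrix (illustrative values).
Sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.5]])
P = np.linalg.cholesky(Sigma_u)          # lower triangular, Sigma_u = P P'

# The orthogonalized innovations w_t = P^{-1} u_t then have covariance
# P^{-1} Sigma_u P^{-1}' = I, so their impulse responses are unambiguous
# once the (recursive) ordering of the variables is fixed.
Pinv = np.linalg.inv(P)
print(np.allclose(P @ P.T, Sigma_u))                    # True
print(np.allclose(Pinv @ Sigma_u @ Pinv.T, np.eye(2)))  # True
```

Note that this makes the nonuniqueness concrete: a different variable ordering gives a different P and hence a different set of impulse responses from the same reduced form.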
10. Systems of Dynamic Simultaneous Equations
Abstract
This chapter serves to point out some possible extensions of the models considered so far and to draw attention to potential problems related to such extensions. So far, we have assumed that all stochastic variables of a system have essentially the same status in that they are all determined within the system. In other words, the model describes the joint generation process of all the observable variables of interest. In practice, the generation process may be affected by other observable variables which are determined outside the system of interest. Such variables are called exogenous or unmodelled variables. In contrast, the variables determined within the system are called endogenous. Although deterministic terms can be included in the set of unmodelled variables, we often have stochastic variables in mind in this category. For instance, weather related variables such as rainfall or hours of sunshine are usually regarded as stochastic exogenous variables. As another example of the latter type of variables, if a small open economy is being studied, the price level or the output of the rest of the world may be regarded as exogenous. A model which specifies the generation process of some variables conditionally on some other unmodelled variables is sometimes called a conditional or partial model because it describes the generation process of a subset of the variables only.
Helmut Lütkepohl

Infinite Order Vector Autoregressive Processes

Frontmatter
11. Vector Autoregressive Moving Average Processes
Abstract
In this chapter, we extend our standard finite order VAR model,
$$y_t = \nu + A_1 y_{t - 1} + \ldots + A_p y_{t - p} + \varepsilon _t , $$
by allowing the error terms, here εt, to be autocorrelated rather than white noise. The autocorrelation structure is assumed to be of a relatively simple type so that εt has a finite order moving average (MA) representation,
$$\varepsilon _t = u_t + M_1 u_{t - 1} + \ldots + M_q u_{t - q} ,$$
where, as usual, u t is zero mean white noise with nonsingular covariance matrix Σu. A finite order VAR process with finite order MA error term is called a VARMA (vector autoregressive moving average) process.
Helmut Lütkepohl
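The VARMA structure just defined — a finite order VAR with a finite order MA error term — can be simulated directly from the two defining equations. The coefficient matrices and sample size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Bivariate VARMA(1,1): y_t = A1 y_{t-1} + eps_t, where the error term
# eps_t = u_t + M1 u_{t-1} is a finite order MA process driven by white
# noise u_t (illustrative coefficient matrices).
A1 = np.array([[0.5, 0.1],
               [0.0, 0.4]])
M1 = np.array([[0.3, 0.0],
               [0.2, 0.3]])
T = 300
u = rng.standard_normal((T, 2))
y = np.zeros((T, 2))
for t in range(1, T):
    eps_t = u[t] + M1 @ u[t - 1]     # autocorrelated MA(1) error
    y[t] = A1 @ y[t - 1] + eps_t     # AR part of the recursion
```

Setting M1 to the zero matrix recovers the standard finite order VAR with white noise errors.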
12. Estimation of VARMA Models
Abstract
In this chapter, maximum likelihood estimation of the coefficients of a VARMA model is considered. Before we can proceed to the actual estimation, a unique set of parameters must be specified. In this context, the problem of nonuniqueness of a VARMA representation becomes important. This identification problem, that is, the problem of identifying a unique structure among many equivalent ones, is treated in Section 12.1. In Section 12.2, the Gaussian likelihood function of a VARMA model is considered. A numerical algorithm for maximizing it and, thus, for computing the actual estimates is discussed in Section 12.3. The asymptotic properties of the ML estimators are the subject of Section 12.4. Forecasting with estimated processes and impulse response analysis are dealt with in Sections 12.5 and 12.6, respectively.
Helmut Lütkepohl
13. Specification and Checking the Adequacy of VARMA Models
Abstract
A great number of strategies have been suggested for specifying VARMA models. There is not a single one that has become a standard like the Box & Jenkins (1976) approach in the univariate case. None of the multivariate procedures is in widespread use for modelling moderate or high-dimensional economic time series. Some are mainly based on a subjective assessment of certain characteristics of a process such as the autocorrelations and partial autocorrelations. A decision on specific orders and constraints on the coefficient matrices is then based on these quantities. Other methods rely on a mixture of statistical testing, use of model selection criteria, and personal judgement of the analyst. Still other procedures are based predominantly on statistical model selection criteria and, in principle, they could be performed automatically by a computer. Automatic procedures have the advantage that their statistical properties can possibly be derived rigorously. In actual applications, some kind of mixture of different approaches is often used. In other words, the expertise and prior knowledge of an analyst will usually not be abandoned in favor of purely statistical procedures. Models suggested by different types of criteria and procedures will be judged and evaluated by an expert before one or more candidates are put to a specific use such as forecasting. The large amount of information in a number of moderately long time series usually makes it necessary to condense the information considerably before essential features of a system become visible.
Helmut Lütkepohl
14. Cointegrated VARMA Processes
Abstract
So far, we have concentrated on stationary VARMA processes for I(0) variables. In this chapter, the variables are allowed to be I(1) and may be cointegrated. As we have seen in Chapter 12, one of the problems in dealing with VARMA models is the nonuniqueness of their parameterization. For inference purposes, it is necessary to focus on a unique representation of a DGP. For stationary VARMA processes, we have considered the echelon form to tackle the identification problem. In the next section, this representation of a VARMA process will be combined with the error correction (EC) form. Thereby it is again possible to separate the long-run cointegration relations from the short-term dynamics. The resulting representation turns out to be a convenient framework for modelling cointegrated variables.
Helmut Lütkepohl
15. Fitting Finite Order VAR Models to Infinite Order Processes
Abstract
In the previous chapters, we have derived properties of models, estimators, forecasts, and test statistics under the assumption of a true model. We have also argued that such an assumption is virtually never fulfilled in practice. In other words, in practice, all we can hope for is a model that provides a useful approximation to the actual data generation process of a given multiple time series. In this chapter, we will, to some extent, take into account this state of affairs and assume that an approximating rather than a true model is fitted. Specifically, we assume that the true data generation process is an infinite order VAR process and, for a given sample size T, a finite order VAR(p) is fitted to the data.
Helmut Lütkepohl

Time Series Topics

Frontmatter
16. Multivariate ARCH and GARCH Models
Abstract
In the previous chapters, we have discussed modelling the conditional mean of the data generation process of a multiple time series, conditional on the past at each particular time point. In that context, the variance or covariance matrix of the conditional distribution was assumed to be time invariant. In fact, in much of the discussion, the residuals or forecast errors were assumed to be independent white noise. Such a simplification is useful and justified in many applications.
Helmut Lütkepohl
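Relaxing the time-invariant conditional variance assumption leads, in the simplest univariate case, to an ARCH(1) process, where the conditional variance depends on the squared past innovation. The parameter values in this sketch are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Univariate ARCH(1): sigma_t^2 = omega + alpha * u_{t-1}^2, so a large
# shock raises next period's conditional variance and volatility clusters,
# even though u_t has zero mean and constant unconditional variance
# omega / (1 - alpha).
omega, alpha = 0.1, 0.5
T = 2000
u = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = omega / (1 - alpha)           # start at the unconditional variance
for t in range(1, T):
    sigma2[t] = omega + alpha * u[t - 1] ** 2   # time-varying conditional variance
    u[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
```

The multivariate (M)GARCH models of the chapter generalize this recursion to conditional covariance matrices.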
17. Periodic VAR Processes and Intervention Models
Abstract
In Part II of the book, we have considered cointegrated VAR models and we have seen that they give rise to nonstationary processes with potentially time varying first and second moments. Yet the models have time invariant coefficients. Nonstationarity, that is, time varying first and/or second moments of a process, can also be modelled in the framework of time varying coefficient processes. Suppose, for instance, that the time series under consideration show a seasonal pattern.
Helmut Lütkepohl
18. State Space Models
Abstract
State space models may be regarded as generalizations of the models considered so far. They have been used extensively in system theory, the physical sciences, and engineering. The terminology is therefore largely from these fields. The general idea behind these models is that an observed (multiple) time series y1,…,yT depends upon a possibly unobserved state zt which is driven by a stochastic process. The relation between yt and zt is described by the observation or measurement equation
$$ y_t = H_t z_t + v_t , $$
(18.1.1)
where Ht is a matrix that may also depend on the period of time, t, and vt is the observation error, which is typically assumed to be a noise process.
Helmut Lütkepohl
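A minimal instance of the measurement equation is a local level model: the unobserved state follows a random walk and the observation adds noise on top of it. Everything below (H = 1, the noise scales) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(6)

# Local level model as a tiny state space example:
#   transition:  z_t = z_{t-1} + w_t          (unobserved state, random walk)
#   observation: y_t = H z_t + v_t            (measurement equation, H = 1)
T = 200
z = np.cumsum(0.1 * rng.standard_normal(T))   # state path driven by noise w_t
v = 0.3 * rng.standard_normal(T)              # observation error v_t
H = 1.0
y = H * z + v                                 # observed series
```

Given such a model, the Kalman filter recovers estimates of the unobserved state zt from the observed y1,…,yT.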
Backmatter
Metadata
Title
New Introduction to Multiple Time Series Analysis
By
Professor Dr. Helmut Lütkepohl
Copyright year
2005
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-27752-1
Print ISBN
978-3-540-40172-8
DOI
https://doi.org/10.1007/978-3-540-27752-1