Asymptotics for out of sample tests of Granger causality
Introduction
Evaluating a time series model's ability to forecast is one method of determining its usefulness. Swanson (1998), Sullivan et al. (1999), Lettau and Ludvigson (2001), Rapach and Wohar (2002) and Hong and Lee (2003) are just a few examples of applications that have determined the appropriateness of a model based on its ability to predict out-of-sample. Under this methodology, a model is judged valuable if the resulting forecast errors are deemed small relative to some loss function. Typically this loss function is squared error, though others such as absolute error and directional accuracy have been used by Leitch and Tanner (1991) and Breen et al. (1989), respectively. This methodology contrasts with traditional methods that judge the quality of a predictive model by its ability to replicate, or "fit", the same realizations used to estimate it (see Inoue and Kilian, 2004).
This paper contributes to recent analytical work on out-of-sample model evaluation, specifically that of West (1996), by providing asymptotic results for out-of-sample tests of Granger causality, that is, tests that compare the predictive ability of two nested models while allowing for a wide range of loss functions including, but not limited to, squared error. Null asymptotic distributions are derived for three commonly used tests that compare the out-of-sample predictive ability of two nested models: a regression-based test for equal mean squared error (MSE) proposed by Granger and Newbold (1977), a similar t-type test commonly attributed to either Diebold and Mariano (1995) or West (1996), and an F-type test similar in spirit to Theil's U but perhaps closer to in-sample likelihood ratio tests. Since the asymptotic distributions of the former two tests are identical, they are referred to collectively as "OOS-t" tests; the latter is referred to as the "OOS-F" test.
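Under squared-error loss these statistics take a simple form. The sketch below is an illustration, not the paper's own code: the function name is ours, and the t-type statistic is computed with a plain sample variance of the loss differential (applications with multi-step or serially correlated forecast errors typically substitute a HAC variance estimator).

```python
import numpy as np

def oos_t_and_f(e1, e2):
    """Compute the OOS-t and OOS-F statistics under squared-error loss.

    e1: out-of-sample forecast errors from the restricted (nested) model
    e2: out-of-sample forecast errors from the unrestricted model
    """
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    P = e1.size                      # number of out-of-sample forecasts
    d = e1**2 - e2**2                # loss differential series
    mse1, mse2 = np.mean(e1**2), np.mean(e2**2)
    # t-type statistic: scaled mean loss differential
    oos_t = np.sqrt(P) * d.mean() / d.std(ddof=0)
    # F-type statistic: scaled relative reduction in MSE
    oos_f = P * (mse1 - mse2) / mse2
    return oos_t, oos_f
```

Both statistics are one-sided: under the null the smaller model is true, so large positive values indicate that the larger model forecasts better.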
The asymptotic null distributions of both the OOS-t and OOS-F tests are nonstandard. Each can be written as a function of stochastic integrals of Brownian motion. Tables of critical values are provided to facilitate the use of these distributions. Monte Carlo evidence in Section 4 suggests that the tests can be well sized, with good power, in samples of the size often found in economic applications. Supporting Monte Carlo evidence can also be found in Clark and McCracken (2001, 2005a, 2005b) and in Inoue and Kilian (2004).
The remainder of the paper proceeds as follows. Section 2 provides a brief background on out-of-sample methods with an emphasis on nested model comparisons. Section 3 and its subsections provide notation, assumptions and theorems regarding the null asymptotics of the test statistics. Section 4 provides Monte Carlo evidence on the size and power of the tests when accuracy is measured using squared loss. Section 5 provides empirical evidence on the predictive content of the Chicago Fed's National Activity Index for growth in Industrial Production and core PCE-based inflation. Section 6 concludes. All proofs are presented in Appendix A.
Background
In recent work, West (1996) shows how to construct asymptotically valid out-of-sample tests of predictive ability when forecasts are generated using estimated parameters. Conditions are provided under which t-type statistics will be asymptotically standard normal. These conditions extend and clarify previous analytical work on out-of-sample inference by Mincer and Zarnowitz (1969), Chong and Hendry (1986), Ghysels and Hall (1990), Fair and Shiller (1989, 1990), Mizrach
Theoretical results
This section provides the null asymptotic distributions of both the OOS-t tests in (1) and (2) and the OOS-F test in (3). It does so in three subsections. Section 3.1 presents the environment and assumptions. Section 3.2 presents the asymptotic null distributions of the statistics (1)–(3). Section 3.3 provides a discussion of the asymptotic results as well as tables of asymptotically valid critical values.
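Because the limiting distributions are functionals of stochastic integrals of Brownian motion, asymptotically valid critical values can be tabulated by Monte Carlo, replacing each stochastic integral with a sum over a discretized Brownian path. The sketch below illustrates only the general approach: the particular functionals G1 = ∫ s^{-1} W dW and G2 = ∫ s^{-2} W^2 ds over [lam, 1], and the combinations 2*G1 - G2 (F-type) and G1/sqrt(G2) (t-type), are assumptions corresponding to the simplest recursive-scheme case with one excess parameter; the paper's Section 3 should be consulted for the exact representations.

```python
import numpy as np

def simulate_limit_draws(lam=0.5, n_steps=2_000, n_reps=5_000, seed=0):
    """Draw from (assumed) limiting distributions of the OOS-F and OOS-t
    statistics by discretizing Brownian motion on [lam, 1], lam > 0."""
    rng = np.random.default_rng(seed)
    s = np.linspace(lam, 1.0, n_steps + 1)
    ds = np.diff(s)
    f_draws = np.empty(n_reps)
    t_draws = np.empty(n_reps)
    for r in range(n_reps):
        # Brownian path started so that Var[W(lam)] = lam
        dW = rng.normal(0.0, np.sqrt(ds))
        W = np.empty(n_steps + 1)
        W[0] = rng.normal(0.0, np.sqrt(lam))
        W[1:] = W[0] + np.cumsum(dW)
        # Ito integrals approximated by left-endpoint Riemann sums
        g1 = np.sum(W[:-1] / s[:-1] * dW)          # ~ int s^{-1} W dW
        g2 = np.sum(W[:-1]**2 / s[:-1]**2 * ds)    # ~ int s^{-2} W^2 ds
        f_draws[r] = 2.0 * g1 - g2
        t_draws[r] = g1 / np.sqrt(g2)
    return f_draws, t_draws
```

Upper-tail percentiles of the simulated draws (for example, `np.percentile(f_draws, 95)`) then serve as asymptotic critical values, which is the kind of numerical tabulation the paper's tables report.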
Monte Carlo evidence
In this section we provide simulation evidence on the finite-sample size and power of the OOS-t and OOS-F tests discussed in Section 3. For brevity, attention is restricted to the commonly used squared loss function and to the recursive forecasting scheme. For each of the MSE-t and MSE-F tests, and under both the null and the alternative, we conduct the test twice, once with each of the two sets of critical values implied by the theory. For the sake of comparison, we also provide
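Under the recursive scheme, the estimation sample expands by one observation with each forecast: the model is estimated on observations 1 through t and used to forecast observation t+1, for t = R, ..., T-1. As a hypothetical illustration (the function name and the OLS setting are ours), one-step-ahead forecast errors can be generated as follows:

```python
import numpy as np

def recursive_forecast_errors(y, X, R):
    """One-step-ahead forecast errors under the recursive scheme:
    for each t = R, ..., T-1, fit OLS on observations 1..t and
    forecast observation t+1, so the estimation window expands."""
    y, X = np.asarray(y, float), np.asarray(X, float)
    T = y.size
    errors = []
    for t in range(R, T):
        beta, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        errors.append(y[t] - X[t] @ beta)
    return np.array(errors)   # P = T - R forecast errors
```

Running this once with the restricted regressor set and once with the unrestricted set yields the two error series whose relative accuracy the OOS-t and OOS-F tests compare.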
Empirical evidence
In this section we follow a number of papers including Stock and Watson (2002) and Shintani (2005) that ask whether diffusion indices are useful for forecasting macroeconomic aggregates. In particular, as in Brave and Fisher (2004) and Clark and McCracken (2005c) we consider whether the CFNAI provides marginal predictive content for monthly growth in industrial production (IP) and monthly changes in core PCE-based inflation (PI). Following the theory in this paper, we focus on short term, one
Conclusion
In this paper we provide the null asymptotic distributions of three statistics commonly used to test for equal predictive ability between two nested parametric regression models. The asymptotic null distributions of these three statistics are non-standard. Numerically calculated critical values are provided so that asymptotically valid tests of equal predictive ability can be constructed. Monte Carlo and empirical evidence suggests that the critical values can be used to provide accurately sized tests.
Acknowledgements
We gratefully acknowledge the helpful comments of: Todd Clark, Ken West, Lutz Kilian, Norm Swanson, Bruce Hansen, Dek Terrell, the Associate Editor and two anonymous referees; seminar participants at UNLV, Queens University, University of Western Ontario, LSU, University of Missouri, 2000 Midwest Econometrics Group, 2000 World Congress of the Econometric Society and the 2000 NBER Summer Institute seminar on forecasting and time series methods. We would also like to thank LSU and the University
References
A new technique for postsample model selection and validation. Journal of Economic Dynamics and Control (1998).
Tests of equal forecast accuracy and encompassing for nested models. Journal of Econometrics (2001).
The power of tests of predictive ability in the presence of structural breaks. Journal of Econometrics (2005).
A consistent test for nonlinear out of sample predictive accuracy. Journal of Econometrics (2002).
Predictive ability with cointegrated variables. Journal of Econometrics (2001).
Robust out of sample inference. Journal of Econometrics (2000).
Empirical exchange rate models of the seventies: do they fit out of sample? Journal of International Economics (1983).
The distribution of the Theil U-statistic in bivariate normal populations. Economics Letters (1992).
Alternative models for conditional stock volatility. Journal of Econometrics (1990).
Testing the monetary model of exchange rate determination: new evidence from a century of data. Journal of International Economics (2002).
Advertising and aggregate consumption: an analysis of causality. Econometrica.
In search of a robust inflation forecast. Economic Perspectives (Federal Reserve Bank of Chicago), Fourth Quarter.
Economic significance of predictable variations in stock index returns. Journal of Finance.
An out of sample test for Granger causality. Macroeconomic Dynamics.
Econometric evaluation of linear macro-economic models. Review of Economic Studies.
Can out-of-sample forecast comparisons help prevent overfitting? Journal of Forecasting.
Evaluating long horizon forecasts. Econometric Reviews.
The predictive content of the output gap for inflation: resolving in-sample and out-of-sample evidence. Journal of Money, Credit, and Banking.
Why forecast performance does not help us choose a model.
Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes. International Economic Review.
Stochastic Limit Theory.
Comparing predictive accuracy. Journal of Business and Economic Statistics.
Nonparametric exchange rate prediction? Journal of International Economics.
Evaluating density forecasts, with applications to financial risk management. International Economic Review.
The informational content of ex ante forecasts. Review of Economics and Statistics.
Comparing information in forecasts from econometric models. American Economic Review.
A test for structural stability of Euler conditions with parameters estimated via the generalized method of moments estimator. International Economic Review.