
2006 | Book

Modern Econometric Analysis

Surveys on Recent Developments

edited by: Professor Dr. Olaf Hübler, Professor Dr. Joachim Frohn

Publisher: Springer Berlin Heidelberg


About this book

The importance of empirical economics and econometric methods has greatly increased during the last 20 years due to the availability of better data and the improved performance of computers. In an information-driven society such as ours, we need to obtain complete and convincing statistical results quickly. This is only possible if the appropriate econometric methods are applied. Traditional econometric analysis concentrates on classical methods which are far from suitable for handling actual economic problems. They can only be used as a starting point for students to learn basic econometrics and as a reference point for more advanced methods. Modern econometrics tries to develop new approaches from an economic perspective. A consequence is that we have less of a unified econometric theory than in former times. Specific branches which require specific methods have been established. Nowadays, nobody has complete knowledge of every area of econometrics. Anyone interested in learning more about a relatively unfamiliar field will require support.

Table of Contents

Frontmatter
1. Developments and New Dimensions in Econometrics
Abstract
This book presents 14 survey papers on developments and new topics in econometrics. The articles demonstrate how German econometricians see the discipline from their specific points of view. They briefly describe the main strands and emphasize some recent methods.
Olaf Hübler, Joachim Frohn
2. On the Specification and Estimation of Large Scale Simultaneous Structural Models
Abstract
This paper surveys the state of the art in the analysis and application of large scale structural simultaneous econometric models (SSEM). First, the importance of such models in empirical economics, and especially for economic policy analysis, is emphasized. We then focus on methodological issues in the application of these models, such as identification, nonstationarity of variables, adequate estimation of the parameters, and the inclusion of identities.
In the light of the latest developments in econometrics, we identify the main unsolved problems in this area, recommend a combined data- and theory-driven procedure for the specification of such models, and suggest how one could overcome some of the indicated problems.
Pu Chen, Joachim Frohn
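As a notational aside (ours, not the chapter's): a linear simultaneous equations model and the reduced form around which the identification question revolves can be written as

```latex
% Structural and reduced form of a linear simultaneous equations model
% (standard textbook notation, not taken from the chapter itself).
\begin{align*}
  B y_t + \Gamma x_t &= u_t
    && \text{(structural form, } B \text{ invertible)} \\
  y_t &= \Pi x_t + v_t, \qquad
    \Pi = -B^{-1}\Gamma, \quad v_t = B^{-1} u_t
    && \text{(reduced form)}
\end{align*}
```

Identification then asks whether the structural coefficients $B$ and $\Gamma$ can be recovered from the reduced-form coefficient matrix $\Pi$.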
3. Dynamic Factor Models
Abstract
Factor models can cope with many variables without running into scarce degrees of freedom problems often faced in a regression-based analysis. In this article we review recent work on dynamic factor models that have become popular in macroeconomic policy analysis and forecasting. By means of an empirical application we demonstrate that these models turn out to be useful in investigating macroeconomic problems.
Jörg Breitung, Sandra Eickmeier
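For orientation, a dynamic factor model summarizes a large panel of $N$ observed series by a small number $r \ll N$ of common factors; the following generic static-form notation is ours, not necessarily the chapter's:

```latex
% A generic dynamic factor model: N observed series X_t driven by
% r << N common factors F_t plus idiosyncratic noise e_t.
\begin{align*}
  X_t &= \Lambda F_t + e_t
    && \text{(observation equation, } \Lambda \text{ factor loadings)} \\
  F_t &= A_1 F_{t-1} + \dots + A_p F_{t-p} + u_t
    && \text{(factor dynamics)}
\end{align*}
```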
4. Unit Root Testing
Abstract
The occurrence of unit roots in economic time series has far-reaching consequences for univariate as well as multivariate econometric modelling. Therefore, unit root tests are nowadays the starting point of most empirical time series studies. The oldest and most widely used test is due to Dickey and Fuller (1979). Reviewing this test and variants thereof, we focus on the importance of modelling the deterministic component. In particular, we survey the growing literature on tests accounting for structural shifts. Finally, further applied aspects are addressed, for instance, how to get the size correct and obtain good power at the same time.
Jürgen Wolters, Uwe Hassler
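As a minimal illustration of the Dickey-Fuller test named in the abstract, the sketch below runs the augmented Dickey-Fuller implementation from statsmodels on two simulated series; the data and all parameter choices are ours, not the chapter's.

```python
# ADF test on a random walk (unit root) vs. a stationary AR(1).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
e = rng.standard_normal(500)

random_walk = np.cumsum(e)            # unit root: y_t = y_{t-1} + e_t
stationary = np.empty(500)            # AR(1): y_t = 0.5 y_{t-1} + e_t
stationary[0] = e[0]
for t in range(1, 500):
    stationary[t] = 0.5 * stationary[t - 1] + e[t]

for name, y in [("random walk", random_walk), ("AR(1)", stationary)]:
    stat, pvalue, *_ = adfuller(y, regression="c", autolag="AIC")
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```

The random walk should fail to reject the unit root null, while the stationary AR(1) should reject it.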
5. Autoregressive Distributed Lag Models and Cointegration
Abstract
This paper considers cointegration analysis within an autoregressive distributed lag (ADL) framework. First, different reparameterizations and interpretations are reviewed. Then we show that the estimation of a cointegrating vector from an ADL specification is equivalent to that from an error-correction (EC) model. Therefore, asymptotic normality available in the ADL model under exogeneity carries over to the EC estimator. Next, we review cointegration tests based on EC regressions. Special attention is paid to the effect of linear time trends in the case of regressions without detrending. Finally, the relevance of our asymptotic results in finite samples is investigated by means of computer experiments. In particular, it turns out that the conditional EC model is superior to the unconditional one.
Uwe Hassler, Jürgen Wolters
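The ADL-to-EC equivalence the abstract refers to can be sketched, in our notation, for the simplest ADL(1,1) case:

```latex
% ADL(1,1) and its algebraically identical error-correction form.
\begin{align*}
  y_t &= \alpha + \phi y_{t-1} + \beta_0 x_t + \beta_1 x_{t-1} + \varepsilon_t
    && \text{(ADL)} \\
  \Delta y_t &= \alpha + \beta_0 \Delta x_t
    - (1-\phi)\bigl(y_{t-1} - \theta x_{t-1}\bigr) + \varepsilon_t,
    \quad \theta = \frac{\beta_0 + \beta_1}{1-\phi}
    && \text{(EC)}
\end{align*}
```

Here $\theta$ is the cointegrating parameter. Since both forms are exact reparameterizations of each other, estimates and their asymptotic properties carry over from one to the other.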
6. Structural Vector Autoregressive Analysis for Cointegrated Variables
Abstract
Vector autoregressive (VAR) models are capable of capturing the dynamic structure of many time series variables. Impulse response functions are typically used to investigate the relationships between the variables included in such models. In this context the relevant impulses, innovations, or shocks to be traced out in an impulse response analysis have to be specified by imposing appropriate identifying restrictions. Taking into account the cointegration structure of the variables offers interesting possibilities for imposing identifying restrictions. Therefore VAR models which explicitly take into account the cointegration structure of the variables, so-called vector error correction models, are considered. The specification, estimation and validation of reduced form vector error correction models are briefly outlined, and imposing structural short- and long-run restrictions within these models is discussed.
Helmut Lütkepohl
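For orientation, the reduced-form vector error correction model discussed here has the standard representation (our notation, deterministic terms omitted):

```latex
% Reduced-form VECM with cointegration rank r.
\[
  \Delta y_t = \alpha \beta' y_{t-1}
    + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + u_t ,
\]
```

where $\beta' y_{t-1}$ collects the $r$ stationary cointegrating relations and $\alpha$ the loading coefficients; structural identifying restrictions then concern the short- and long-run effects of the shocks underlying $u_t$.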
7. Econometric Analysis of High Frequency Data
Abstract
Owing to enormous advances in data acquisition and processing technology, the study of high (or ultra) frequency data has become an important area of econometrics. At least three avenues of econometric methods have been followed to analyze high frequency financial data: models in tick time ignoring the time dimension of sampling, duration models specifying the time span between transactions and, finally, fixed time interval techniques. Starting from the strong assumption that quotes are irregularly generated from an underlying exogenous arrival process, fixed interval models promise feasibility of familiar time series techniques. Moreover, fixed interval analysis is a natural means to investigate multivariate dynamics. In particular, models of price discovery are implemented in this avenue of high frequency econometrics. Recently, a sound statistical theory of ‘realized volatility’ has been developed. In this framework high frequency log price changes are seen as a means to observe volatility at some lower frequency.
Helmut Herwartz
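The ‘realized volatility’ framework mentioned at the end of the abstract rests on a simple quantity (notation ours):

```latex
% Realized volatility from M intraday log returns on day t.
\[
  RV_t = \sum_{i=1}^{M} r_{t,i}^2 , \qquad
  r_{t,i} = p_{t,i} - p_{t,i-1} ,
\]
```

where $p_{t,i}$ is the $i$-th intraday log price on day $t$; under ideal (noise-free) conditions, $RV_t$ consistently estimates the integrated variance of day $t$ as the sampling frequency $M \to \infty$.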
8. Using Quantile Regression for Duration Analysis
Abstract
Quantile regression is emerging as a popular technique in econometrics and biometrics for exploring the distribution of duration data. This paper discusses quantile regression for duration analysis, allowing for a flexible specification of the functional relationship and of the error distribution. Censored quantile regression addresses the right censoring of the response variable that is common in duration analysis. We compare quantile regression to standard duration models. Quantile regression does not impose a proportional effect of the covariates on the hazard over the duration time. However, the method cannot accommodate time-varying covariates, and it has so far not been extended to allow for unobserved heterogeneity and competing risks. We also discuss how hazard rates can be estimated using quantile regression methods.
Bernd Fitzenberger, Ralf A. Wilke
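A minimal sketch of quantile regression on (simulated) log durations, using the QuantReg estimator in statsmodels; the data-generating process is a hypothetical choice of ours, and the sketch deliberately ignores censoring, which the chapter's censored quantile regression methods are designed to handle.

```python
# Quantile regression of log durations on a covariate at three quantiles.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
x = rng.uniform(0, 1, n)                      # single covariate
log_dur = 1.0 + 0.5 * x + rng.gumbel(size=n)  # hypothetical duration model

X = sm.add_constant(x)
for q in (0.25, 0.5, 0.75):
    res = sm.QuantReg(log_dur, X).fit(q=q)
    print(f"q={q}: intercept={res.params[0]:.2f}, slope={res.params[1]:.2f}")
```

Estimating several quantiles side by side is what lets the method reveal covariate effects that vary across the duration distribution, rather than a single proportional effect on the hazard.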
9. Multilevel and Nonlinear Panel Data Models
Abstract
This paper presents a selective survey of panel data methods, with a focus on new developments. In particular, linear multilevel models and specific nonlinear, nonparametric and semiparametric models are at the center of the survey. In contrast to linear models, no unified methods exist for nonlinear approaches. In this case conditional maximum likelihood methods dominate for fixed effects models. Under random effects assumptions it is sometimes possible to employ conventional maximum likelihood methods, using Gaussian quadrature to reduce a T-dimensional integral. Alternatives are the generalized method of moments and simulated estimators. If the nonlinear function is not exactly known, nonparametric or semiparametric methods should be preferred.
Olaf Hübler
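The Gaussian-quadrature step mentioned in the abstract can be sketched as follows, assuming a normal random effect $\alpha_i \sim N(0, \sigma^2)$ (our notation, not the chapter's):

```latex
% Likelihood contribution of unit i in a random-effects model: the
% T-dimensional problem reduces to a one-dimensional integral, which
% Gauss-Hermite quadrature (nodes a_m, weights w_m) approximates.
\[
  L_i = \int \prod_{t=1}^{T} f(y_{it} \mid x_{it}, \alpha)\,
        \frac{e^{-\alpha^2 / 2\sigma^2}}{\sigma\sqrt{2\pi}}\, d\alpha
  \;\approx\; \frac{1}{\sqrt{\pi}} \sum_{m=1}^{M} w_m
        \prod_{t=1}^{T} f\bigl(y_{it} \mid x_{it}, \sqrt{2}\,\sigma a_m\bigr).
\]
```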
10. Nonparametric Models and Their Estimation
Abstract
Nonparametric models have become more and more popular over the last two decades. One reason for their popularity is the availability of software that makes it easy to fit smooth but otherwise unspecified functions to data. A benefit of these models is that the functional shape of a regression function is not specified in advance but determined by the data. This allows for more insight, which can be interpreted at a subject-matter level.
This paper gives an overview of available fitting routines, commonly called smoothing procedures. Moreover, a number of extensions to classical scatterplot smoothing are discussed, with examples supporting the advantages of the routines.
Göran Kauermann
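As a minimal scatterplot-smoothing sketch in the spirit of the routines surveyed above, here is LOWESS from statsmodels applied to simulated data; the signal and tuning constant are illustrative choices of ours.

```python
# LOWESS scatterplot smoothing: recover an unspecified smooth function.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + 0.3 * rng.standard_normal(200)  # smooth signal + noise

fitted = lowess(y, x, frac=0.3)  # columns: sorted x, smoothed y
print(fitted[:5])
```

Note that no functional form for the regression curve is specified anywhere; the local fitting window (`frac`) is the only tuning decision.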
11. Microeconometric Models and Anonymized Micro Data
Abstract
The paper first provides a short review of the most common microeconometric models, including logit, probit, discrete choice, duration models, models for count data and Tobit-type models. In the second part we consider the situation where the micro data have undergone some anonymization procedure, which has become an important issue since otherwise confidentiality would not be guaranteed. We briefly describe the most important approaches to data protection, which can also be seen as deliberately introducing measurement errors. We also consider the possibility of correcting the estimation procedure to take the anonymization procedure into account. We illustrate this for the case of binary data which are anonymized by ‘post-randomization’ and used in a probit model. We show the effect of ‘naive’ estimation, i.e. estimation that disregards the anonymization procedure. We also show that a ‘corrected’ estimate is available which is satisfactory in statistical terms. This remains true if parameters of the anonymization procedure have to be estimated as well.
Gerd Ronning
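A hypothetical simulation of the ‘naive’ estimation effect described above: a binary response is post-randomized (flipped with a known probability), and the probit coefficient estimated from the anonymized data is attenuated. The design and all numbers are ours, not the chapter's; the sketch shows only the naive bias, not the corrected estimator.

```python
# Post-randomization (PRAM) of a binary response and its effect on probit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, beta = 20_000, 1.0
x = rng.standard_normal(n)
y = (beta * x + rng.standard_normal(n) > 0).astype(int)  # true probit data

flip = rng.uniform(size=n) < 0.15         # PRAM: flip with known prob. 0.15
y_anon = np.where(flip, 1 - y, y)

X = sm.add_constant(x)
print("true data :", sm.Probit(y, X).fit(disp=0).params[1].round(3))
print("naive/PRAM:", sm.Probit(y_anon, X).fit(disp=0).params[1].round(3))
```

Because the flip probability is known by design, a corrected likelihood can account for it, which is the route the chapter's ‘corrected’ estimator takes.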
12. Ordered Response Models
Abstract
We discuss regression models for ordered responses, such as ratings of bonds, schooling attainment, or measures of subjective well-being. Commonly used models in this context are the ordered logit and ordered probit regression models. They are based on an underlying latent model with a single index function and constant thresholds. We argue that these approaches are overly restrictive and preclude a flexible estimation of the effect of regressors on the discrete outcome probabilities. For example, the signs of the marginal probability effects can only change once when moving from the smallest category to the largest one. We then discuss several alternative models that overcome these limitations. An application illustrates the benefit of these alternatives.
Stefan Boes, Rainer Winkelmann
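The latent single-index model with constant thresholds that the abstract calls restrictive can be written, for the ordered probit case, as (our notation):

```latex
% Ordered probit: one latent index, fixed thresholds
% kappa_0 = -inf < kappa_1 < ... < kappa_J = +inf, outcomes j = 1,...,J.
\[
  y_i^* = x_i'\beta + \varepsilon_i , \qquad
  y_i = j \iff \kappa_{j-1} < y_i^* \le \kappa_j , \qquad
  P(y_i = j \mid x_i)
    = \Phi(\kappa_j - x_i'\beta) - \Phi(\kappa_{j-1} - x_i'\beta).
\]
```

Because a single index $x_i'\beta$ drives all outcome probabilities, the marginal probability effect of a regressor can change sign only once across the ordered categories, which is precisely the restriction the chapter's alternative models relax.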
13. Some Recent Advances in Measurement Error Models and Methods
Abstract
A measurement error model is a regression model with (substantial) measurement errors in the variables. Disregarding these measurement errors in estimating the regression parameters results in asymptotically biased estimators. Several methods have been proposed to eliminate, or at least to reduce, this bias, and the relative efficiency and robustness of these methods have been compared. The paper gives an account of these endeavors. In another context, when data are of a categorical nature, classification errors play a similar role as measurement errors in continuous data. The paper also reviews some recent advances in this field.
Hans Schneeweiß, Thomas Augustin
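The asymptotic bias referred to above is easiest to see in the classical one-regressor case (a standard result, in our notation):

```latex
% Attenuation bias under classical measurement error: the regressor xi
% is observed only as x = xi + u, with xi, u, epsilon mutually uncorrelated.
\[
  y = \beta \xi + \epsilon , \quad x = \xi + u
  \;\Longrightarrow\;
  \hat{\beta}_{OLS} \xrightarrow{\;p\;}
    \beta \, \frac{\sigma_\xi^2}{\sigma_\xi^2 + \sigma_u^2} ,
\]
```

so naive OLS is biased toward zero, and the correction methods surveyed in the chapter aim to undo exactly this attenuation.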
14. The Microeconometric Estimation of Treatment Effects — An Overview
Abstract
The need to evaluate the performance of active labour market policies is not questioned any longer. Even though OECD countries spend significant shares of national resources on these measures, unemployment rates remain high or even increase. We focus on microeconometric evaluation, which has to solve the fundamental evaluation problem and overcome the possible occurrence of selection bias. When using non-experimental data, several evaluation approaches can be considered. The aim of this paper is to review the most relevant estimators and discuss their identifying assumptions and their (dis-)advantages. We present estimators based on some form of exogeneity (selection on observables) as well as estimators where selection might also occur on unobservable characteristics. Since the possible occurrence of effect heterogeneity has become a major topic in evaluation research in recent years, we also assess the ability of each estimator to deal with it. Additionally, we discuss some recent extensions of the static evaluation framework to allow for dynamic treatment evaluation.
Marco Caliendo, Reinhard Hujer
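The fundamental evaluation problem can be stated compactly in potential-outcomes notation (our formulation):

```latex
% Average treatment effect on the treated (ATT). E[Y^0 | D=1] is never
% observed, which is the fundamental evaluation problem.
\[
  \tau_{ATT} = E\bigl[Y^1 - Y^0 \mid D = 1\bigr]
    = \underbrace{E[Y^1 \mid D = 1]}_{\text{observed}}
    - \underbrace{E[Y^0 \mid D = 1]}_{\text{counterfactual}} .
\]
```

Selection-on-observables estimators replace the counterfactual term by $E[Y^0 \mid D = 0, X]$ under a conditional independence assumption; the other estimator classes reviewed in the chapter relax this assumption in different directions.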
15. Survey Item Nonresponse and its Treatment
Abstract
One of the most salient data problems empirical researchers face is the lack of informative responses in survey data. This contribution briefly surveys the literature on item nonresponse behavior and its determinants before describing four approaches to addressing item nonresponse: casewise deletion of observations, weighting, imputation, and model-based procedures. We describe the basic approaches and their strengths and weaknesses, and illustrate some of their effects using a simulation study. The paper concludes with some recommendations for the applied researcher.
Susanne Rässler, Regina T. Riphahn
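A hypothetical illustration of why item nonresponse matters and how weighting can help: when high earners are less likely to report their income, casewise deletion biases the estimated mean, while inverse-probability weighting (with known response probabilities, an idealized assumption of ours) removes the bias. The simulation design is not from the paper.

```python
# Casewise deletion vs. inverse-probability weighting under systematic
# item nonresponse on an income question.
import numpy as np

rng = np.random.default_rng(4)
income = rng.lognormal(mean=10, sigma=0.5, size=10_000)

# response probability declines with (log) income
p_respond = 1 / (1 + np.exp((np.log(income) - 10) * 2))
observed = rng.uniform(size=income.size) < p_respond

print("true mean        :", income.mean().round(0))
print("casewise deletion:", income[observed].mean().round(0))
print("IP weighting     :", np.average(income[observed],
                                       weights=1 / p_respond[observed]).round(0))
```

In practice the response probabilities are unknown and must themselves be modeled, which is where the weighting, imputation, and model-based procedures compared in the paper come in.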
Metadata
Title
Modern Econometric Analysis
edited by
Professor Dr. Olaf Hübler
Professor Dr. Joachim Frohn
Copyright year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-32693-9
Print ISBN
978-3-540-32692-2
DOI
https://doi.org/10.1007/3-540-32693-6