2001 | OriginalPaper | Chapter
Monte Carlo Investigation
Author : Dr. Joachim Inkmann
Published in: Conditional Moment Estimation of Nonlinear Equation Systems
Publisher: Springer Berlin Heidelberg
Included in: Professional Book Archive
The first Monte Carlo experiment (extracted from Inkmann, 2000) attempts to provide evidence on the small sample performance of three estimators, each efficient within a different class of estimators using an increasing amount of distributional information. The first estimator is the conventional two-step GMM estimator, labeled GMM2 from now on, which uses the estimator (7.1.1) of the optimal weight matrix. It was shown in Section 5.2 that this estimator reaches the efficiency bound Λu for a given set of unconditional moment functions. The second estimator results from using the GMM2 estimator as an initial estimator for the estimation of the unknown optimal instruments. The three-step estimator based on these optimal instruments, GMM3, attains the efficiency bound Λc for a given set of conditional moment functions. Because conditional moment restrictions imply an infinite set of orthogonality conditions, the asymptotic efficiency advantage of GMM3 is achieved by imposing a stronger distributional assumption. For the estimation of the optimal instruments, the K-nearest neighbor approach presented in Section 8.3 is chosen, which is particularly simple to implement. The third estimator is a maximum likelihood estimator, which requires a specification of the complete conditional distribution and achieves the efficiency bound in the class of parametric estimators. The ML estimator can therefore be regarded as a benchmark for the two GMM estimators.
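The two-step logic behind GMM2 can be illustrated with a minimal sketch. The design below (a simple linear instrumental-variables model with heteroskedastic errors) is purely hypothetical and is not the nonlinear equation system studied in the chapter; it only shows the generic mechanics: a first step with a preliminary weight matrix, followed by a second step that weights the moments by the inverse of their estimated covariance, the role played by estimator (7.1.1) in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear IV design (NOT the chapter's model):
# y = theta * x + u,  E[Z'u] = 0,  heteroskedastic u.
n = 500
Z = rng.normal(size=(n, 2))                           # instruments
x = Z @ np.array([1.0, 0.5]) + rng.normal(size=n)     # endogenous regressor
u = rng.normal(size=n) * (1 + 0.5 * np.abs(Z[:, 0]))  # heteroskedastic error
theta_true = 2.0
y = theta_true * x + u
X = x[:, None]                                        # n x 1 regressor matrix

def gmm_linear(W):
    """Closed-form linear GMM estimate for weight matrix W:
    theta = (X'Z W Z'X)^{-1} X'Z W Z'y."""
    A = X.T @ Z @ W @ Z.T @ X
    b = X.T @ Z @ W @ Z.T @ y
    return np.linalg.solve(A, b)

# Step 1: consistent but inefficient estimate, W1 = (Z'Z/n)^{-1}.
W1 = np.linalg.inv(Z.T @ Z / n)
theta1 = gmm_linear(W1)

# Step 2: optimal weight matrix = inverse of the sample covariance of
# the moment functions evaluated at the first-step estimate.
u_hat = y - X @ theta1
m = Z * u_hat[:, None]                # n x 2 matrix of moment contributions
S = m.T @ m / n
theta2 = gmm_linear(np.linalg.inv(S))  # the GMM2-style second-step estimate
print(theta2)
```

A GMM3-style refinement would add a third step: use `theta2` to estimate the optimal instruments nonparametrically (for example by K-nearest neighbor smoothing of the conditional moment variance, as in Section 8.3) and re-estimate with those instruments.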