

Computation of GMM Estimators

Author: Dr. Joachim Inkmann

Published in: Conditional Moment Estimation of Nonlinear Equation Systems

Publisher: Springer Berlin Heidelberg


For both consistency and asymptotic normality of the GMM estimator it is not necessary to assume that $$\hat \theta $$ precisely minimizes the GMM objective function (2.1.6). Andrews (1997) points out that for Theorem 2 (consistency) $$\hat \theta $$ is only required to be within $$o_p(1)$$ of the global minimum, and for Theorem 3 (asymptotic normality) within $$o_p(n^{-1/2})$$, where $$X_n = o_p(a_n)$$ conveniently abbreviates $$\text{plim}\, X_n / a_n = 0$$ (cf. Amemiya, 1985, p. 89). The estimator $$\hat \theta $$ is usually obtained by iterative numerical optimization methods like the Newton-Raphson algorithm (cf. Amemiya, 1985, ch. 4.4). Starting from any value in the parameter space, this procedure produces a sequence of estimates $$\tilde \theta_j$$ (j = 0, 1, 2, …) which hopefully converges to the global minimum of the objective function. A typical Newton-Raphson iteration for the solution of the minimization problem (2.1.6) has the form

$$ \tilde \theta_{j+1} = \tilde \theta_j - \left[ \left( \tfrac{1}{n}\sum\limits_{i=1}^n G\left( Z_i, \tilde \theta_j \right) \right)^\prime \hat W \left( \tfrac{1}{n}\sum\limits_{i=1}^n G\left( Z_i, \tilde \theta_j \right) \right) \right]^{-1} \times \left( \tfrac{1}{n}\sum\limits_{i=1}^n G\left( Z_i, \tilde \theta_j \right) \right)^\prime \hat W \left( \tfrac{1}{n}\sum\limits_{i=1}^n g\left( Z_i, \tilde \theta_j \right) \right), \quad (4.1.1) $$

where $$g(Z_i, \theta)$$ is the moment function entering (2.1.6) and $$G(Z_i, \theta) = \partial g(Z_i, \theta)/\partial \theta^\prime$$ its Jacobian. This algorithm ensures convergence to a global minimum if the objective function is convex, which, however, is the exception rather than the rule for the nonlinear models encountered in microeconometric applications, as discussed in the previous chapter. Otherwise the iteration routine may settle in a local minimum, which renders the parameter estimators inconsistent and alters their asymptotic distribution. To circumvent this problem, Andrews (1997) proposes an optimization algorithm which guarantees consistency and asymptotic normality of the resulting GMM estimators provided that r > q holds. Andrews’ method is described in detail in the next section.
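To make the iteration (4.1.1) concrete, the following minimal Python sketch implements the update for a user-supplied moment function. The function name gmm_gauss_newton and all parameter names are illustrative, not from the source; it assumes g(z, theta) returns the r-vector of moments for one observation and G(z, theta) its r × q Jacobian, and the simple step-size tolerance is only a crude stand-in for the formal stopping rule discussed in the next section.

```python
import numpy as np

def gmm_gauss_newton(g, G, Z, theta0, W, max_iter=100, tol=1e-8):
    """Iterate the GMM update (4.1.1) from a starting value theta0.

    g(z, theta): moment vector of one observation, shape (r,)
    G(z, theta): Jacobian of g with respect to theta, shape (r, q)
    Z: sequence of n observations; W: (r, r) weighting matrix.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        # Sample means of the moments and of their Jacobian
        g_bar = np.mean([g(z, theta) for z in Z], axis=0)   # shape (r,)
        G_bar = np.mean([G(z, theta) for z in Z], axis=0)   # shape (r, q)
        # Step from (4.1.1): [G'WG]^{-1} G'W g
        step = np.linalg.solve(G_bar.T @ W @ G_bar, G_bar.T @ W @ g_bar)
        theta = theta - step
        if np.linalg.norm(step) < tol:  # placeholder stopping criterion
            break
    return theta
```

The bracketed term in (4.1.1), reproduced in the np.linalg.solve call, is the Gauss-Newton approximation to the Hessian of the objective function. Because the objective is generally not convex, a practical implementation would restart this routine from several starting values to reduce the risk of stopping at a local minimum.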

Metadata
Copyright Year
2001
DOI
https://doi.org/10.1007/978-3-642-56571-7_4