2001 | OriginalPaper | Chapter
The Conditional Moment Approach to GMM Estimation
Author : Dr. Joachim Inkmann
Published in: Conditional Moment Estimation of Nonlinear Equation Systems
Publisher: Springer Berlin Heidelberg
Included in: Professional Book Archive
Let Z be a random vector which includes both endogenous and explanatory variables. Suppose the data $\{Z_i : i = 1, \cdots, n\}$ consists of n independent draws from the probability distribution of Z. Assume the equation system of interest can be represented by an s × 1 residual vector $\rho(Z,\theta) = \left( \rho_1(Z,\theta), \rho_2(Z,\theta), \cdots, \rho_s(Z,\theta) \right)'$ whose elements are possibly nonlinear functions of an unknown q × 1 parameter vector θ. In the following, ρ(Z,θ) will be referred to as the vector of conditional moment functions. The conditional moment estimation principle rests on the assumption that the probability distribution of Z satisfies the conditional moment restrictions 2.1.1 $$ E\left[ {\rho \left( {Z,{\theta _0}} \right)|X} \right] = 0, $$ where θ0 denotes the population parameter vector to be estimated and X a vector of conditioning variables or, equivalently, instruments. This assumption states that each residual is orthogonal to all instruments in the conditional mean sense. Eventually, the following set of weaker conditional moment restrictions will be imposed 2.1.2 $$ E\left[ {{\rho _l}\left( {Z,{\theta _0}} \right)|{X_l}} \right] = 0,\quad \text{for } l = 1, \cdots ,s, $$ where $X_l$ is a subvector of X containing the instruments for equation l, which may be correlated with the other equations' residuals. Whenever (2.1.2) is assumed to hold in the following, it is implicitly assumed that $X_l$ is a proper subvector of X in at least one equation, because otherwise (2.1.1) and (2.1.2) are completely equivalent.
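The link between (2.1.1) and GMM estimation can be illustrated with a minimal sketch: a conditional moment restriction E[ρ(Z,θ₀)|X] = 0 implies the unconditional moments E[a(X)ρ(Z,θ₀)] = 0 for any function a of the instruments, and a GMM estimator minimizes a quadratic form in the corresponding sample moments. The single-equation linear model, instrument choice a(X) = (1, X)', and data-generating process below are hypothetical and not from the text; they serve only to make the moment construction concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical DGP (illustration only): y = x*theta0 + u with an endogenous
# regressor x and an instrument w satisfying the conditional moment
# restriction E[u | w] = 0.
theta0 = 2.0
w = rng.normal(size=n)            # instrument (the conditioning variable X)
v = rng.normal(size=n)
u = v + rng.normal(size=n)        # residual, correlated with x via v
x = w + v                         # endogenous regressor
y = x * theta0 + u

def rho(theta):
    """Residual function rho(Z, theta) = y - x*theta (here s = 1)."""
    return y - x * theta

# E[rho | w] = 0 implies E[a(w) * rho] = 0 for any function a of the
# instrument; here a(w) = (1, w)', giving two unconditional moments.
A = np.column_stack([np.ones(n), w])

def gmm_objective(theta):
    g = A.T @ rho(theta) / n      # 2x1 sample moment vector g_n(theta)
    return g @ g                  # quadratic form with identity weighting

# With one parameter and linear residuals, the minimizer has a closed form:
# g_n(theta) = ay - ax*theta, so theta_hat = (ax . ay) / (ax . ax).
ax = A.T @ x / n
ay = A.T @ y / n
theta_hat = (ax @ ay) / (ax @ ax)
print(theta_hat)                  # close to theta0 = 2.0 in large samples
```

The identity weighting matrix keeps the sketch short; an efficiency-oriented implementation would replace it with an estimate of the inverse moment covariance, and with s > 1 equations the moment vector would stack one block per residual, using only that equation's instruments $X_l$ as in (2.1.2).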