2001 | OriginalPaper | Chapter
Asymptotic Efficiency Bounds
Author: Dr. Joachim Inkmann
Published in: Conditional Moment Estimation of Nonlinear Equation Systems
Publisher: Springer Berlin Heidelberg
Included in: Professional Book Archive
Any consistent and asymptotically normal estimator whose variance-covariance matrix of the stabilizing transformation attains the Cramér-Rao efficiency bound is said to be asymptotically efficient (cf. Amemiya, 1985, p. 124). It is well known that the Cramér-Rao bound is given by the inverse of the information matrix. Throughout this chapter, let J(θ₀) denote the information matrix for a single observation, evaluated at the true parameter vector, defined as

$$ J\left( \theta_0 \right) \equiv -E\left[ \frac{\partial^2 \ln f\left( Z \,|\, \theta_0 \right)}{\partial \theta \, \partial \theta'} \right], \tag{5.1.1} $$

where ∂² ln f(z | θ)/∂θ∂θ′ is the Hessian matrix for a single observation, containing the second derivatives of its loglikelihood contribution ln f(z | θ). Let S(θ) ≡ ∂ ln f(z | θ)/∂θ denote the vector of first derivatives of the loglikelihood contribution of a single observation, henceforth referred to as the score. Using the information matrix equality at the individual level, (5.1.1) can be rewritten as

$$ J\left( \theta_0 \right) = E\left[ S(\theta_0) S(\theta_0)' \right] = V\left[ S(\theta_0) \right], \tag{5.1.2} $$

where the second equality holds because the score has expectation zero at the true parameter vector. This form will be more convenient for the results stated in the following two sections.
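The information matrix equality underlying (5.1.2) can be checked numerically for a simple case. The sketch below (an illustration, not part of the text; a Bernoulli(p) likelihood and Monte Carlo sampling are assumed) compares the negative expected Hessian from (5.1.1), the expected outer product of the score and its variance from (5.1.2), and the known Fisher information 1/(p(1−p)):

```python
import numpy as np

# Illustration (assumed example): verify the information matrix equality
# J(theta0) = -E[Hessian] = E[S S'] = V[S] for a Bernoulli(p) likelihood.
# Log-likelihood contribution: ln f(z|p) = z ln p + (1-z) ln(1-p)
# Score:   S(p) =  z/p - (1-z)/(1-p)
# Hessian: H(p) = -z/p**2 - (1-z)/(1-p)**2

rng = np.random.default_rng(0)
p0 = 0.3                                  # true parameter value
z = rng.binomial(1, p0, size=1_000_000).astype(float)

score = z / p0 - (1 - z) / (1 - p0)
hess = -z / p0**2 - (1 - z) / (1 - p0)**2

J_analytic = 1.0 / (p0 * (1 - p0))        # known Fisher information
J_hessian = -hess.mean()                  # -E[Hessian], as in (5.1.1)
J_outer = (score**2).mean()               # E[S S'], as in (5.1.2)
J_var = score.var()                       # V[S]; E[S] = 0 at theta0

print(J_analytic, J_hessian, J_outer, J_var)
```

Up to Monte Carlo error, all three sample quantities agree with the analytic information, mirroring the equality of (5.1.1) and (5.1.2) at the individual level.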