2012 | OriginalPaper | Chapter
Theory of Estimation
Authors : Wolfgang Karl Härdle, Léopold Simar
Published in: Applied Multivariate Statistical Analysis
Publisher: Springer Berlin Heidelberg
We know from our basic knowledge of statistics that one of the objectives in statistics is to better understand and model the underlying process which generates data. This is known as statistical inference: we infer, from information contained in a sample, properties of the population from which the observations are taken. In multivariate statistical inference, we do exactly the same. The basic ideas were introduced in Section 4.5 on sampling theory: we observed the values of a multivariate random variable $X$ and obtained a sample ${\mathcal{X}}=\{x_{i}\}_{i=1}^{n}$. Under random sampling, these observations are considered to be realisations of a sequence of i.i.d. random variables $X_{1},\dots,X_{n}$, where each $X_{i}$ is a $p$-variate random variable which replicates the \emph{parent} or \emph{population} random variable $X$. In this chapter, for notational convenience, we will no longer differentiate between a random variable $X_{i}$ and an observation of it, $x_{i}$. We will simply write $x_{i}$, and it should be clear from the context whether a random variable or an observed value is meant.
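The sampling setup described above can be sketched in a few lines of code. This is a minimal illustration, not from the text: it draws $n$ i.i.d. $p$-variate observations from a hypothetical parent distribution, chosen here to be a multivariate normal $N_p(\mu,\Sigma)$; the names `mu`, `Sigma`, `n`, and `p` are illustrative assumptions.

```python
import numpy as np

# Hypothetical parent (population) distribution: N_p(mu, Sigma).
# Under random sampling, each row x_i of X below is one i.i.d.
# realisation of the p-variate parent random variable X.
rng = np.random.default_rng(0)

p = 3                 # dimension of each observation (illustrative)
n = 100               # sample size (illustrative)
mu = np.zeros(p)      # population mean vector
Sigma = np.eye(p)     # population covariance matrix

# The sample X = {x_i}_{i=1}^{n}, stacked as an (n x p) data matrix.
X = rng.multivariate_normal(mu, Sigma, size=n)
print(X.shape)  # (100, 3)
```

Stacking the $n$ observations as rows of an $n \times p$ data matrix is the convention used throughout the sampling-theory material this chapter builds on.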