2013 | Original Paper | Chapter
Published in:
Mathematical Statistics for Economics and Business
Prior to this point, our study of probability theory and its implications has essentially addressed questions of deduction, being of the type: “Given a probability space, what can we deduce about the characteristics of outcomes of an experiment?” Beginning with this chapter, we turn this question around, and focus our attention on statistical inference and questions of the form: “Given characteristics associated with the outcomes of an experiment, what can we infer about the probability space?”
1. More formally, the term stochastic process refers to any collection of random variables indexed by some index set \( T \); i.e., \( \{ X_t,\ t \in T \} \) is a stochastic process.
2. This follows straightforwardly from the definitions of marginal and conditional density functions, as the reader should verify.
3. These conditions, coupled with the condition that \( x \in \{0,1\} \), ensure that the denominators in the density expressions are positive and that the numerators are nonnegative.
4. An alternative proof, requiring only that moments up to the \( r \)th order exist, can be based on Khinchine's WLLN. Although we will not use the property later, the reader can utilize Kolmogorov's SLLN to also demonstrate that \( M^{\prime}_r \mathop{\to}\limits^{\mathrm{as}} \mu^{\prime}_r \) (see Chapter 5).
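The almost-sure convergence \( M^{\prime}_r \mathop{\to}\limits^{\mathrm{as}} \mu^{\prime}_r \) can be illustrated by simulation. The sketch below is my own illustration, not from the text: it draws uniform(0,1) observations, for which \( \mu^{\prime}_2 = {\rm E}(X^2) = 1/3 \), and shows the second sample moment about the origin settling near that value as \( n \) grows.

```python
import random

def sample_moment(xs, r):
    """rth sample moment about the origin: M'_r = (1/n) * sum(x_i ** r)."""
    return sum(x ** r for x in xs) / len(xs)

random.seed(12345)
draws = [random.random() for _ in range(100_000)]  # uniform(0,1), so mu'_2 = 1/3

# M'_2 computed on progressively larger prefixes of the sample
for n in (100, 10_000, 100_000):
    print(n, round(sample_moment(draws[:n], 2), 4))

# for the full sample, M'_2 should be very close to 1/3
assert abs(sample_moment(draws, 2) - 1/3) < 0.01
```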
5. Some authors define the sample variance as \( S_n^2 = \left( n/(n - 1) \right) M_2 \), so that \( {\rm E}\left( S_n^2 \right) = \sigma^2 \), which identifies \( S_n^2 \) as an unbiased estimator of \( \sigma^2 \) (see Section 7.2). However, this definition would be inconsistent with the aforementioned fact that \( M_2 \), and not \( \left( n/(n - 1) \right) M_2 \), is the second moment about the mean, and thus the variance, of the sample or empirical distribution function, \( \hat{F}_n \).
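The distinction in this footnote maps directly onto Python's standard statistics module, where pvariance uses divisor \( n \) (i.e., \( M_2 \), the variance of the empirical distribution) and variance uses divisor \( n - 1 \) (i.e., \( S_n^2 \)). A small check, offered only as an illustration with made-up data:

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
xbar = sum(data) / n

# M_2: second sample moment about the mean (divisor n)
m2 = sum((x - xbar) ** 2 for x in data) / n

# S_n^2 = (n / (n - 1)) * M_2: the unbiased estimator of sigma^2
s2 = (n / (n - 1)) * m2

assert abs(m2 - statistics.pvariance(data)) < 1e-12  # divisor n
assert abs(s2 - statistics.variance(data)) < 1e-12   # divisor n - 1
print(m2, s2)  # m2 = 4.0 for this data; s2 = (8/7) * 4.0
```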
6. This is an example of a joint sample moment about the origin, the general definition being given by \( M^{\prime}_{r,s} = \left( 1/n \right)\sum\nolimits_{i=1}^n X_i^r Y_i^s \). The definition for the case of a joint sample moment about the mean replaces \( X_i \) with \( X_i - \overline{X}_n \), \( Y_i \) with \( Y_i - \overline{Y}_n \), and \( M^{\prime}_{r,s} \) with \( M_{r,s} \).
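For \( r = s = 1 \), the two kinds of joint sample moment are linked by the familiar covariance identity \( M_{1,1} = M^{\prime}_{1,1} - \overline{x}\,\overline{y} \). The following sketch (my own example data, not from the text) computes both moments from their definitions and verifies the identity:

```python
def joint_moment_origin(xs, ys, r, s):
    """M'_{r,s} = (1/n) * sum(x_i**r * y_i**s): joint moment about the origin."""
    return sum(x ** r * y ** s for x, y in zip(xs, ys)) / len(xs)

def joint_moment_mean(xs, ys, r, s):
    """M_{r,s}: joint sample moment about the mean."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return sum((x - xbar) ** r * (y - ybar) ** s for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 1.0, 4.0, 3.0]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)

# M_{1,1} = M'_{1,1} - xbar * ybar  (the sample covariance identity)
lhs = joint_moment_mean(xs, ys, 1, 1)
rhs = joint_moment_origin(xs, ys, 1, 1) - xbar * ybar
assert abs(lhs - rhs) < 1e-12
print(lhs)  # 0.75 for this data
```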
7. See Kendall and Stuart (1977), Advanced Theory, Vol. 1, pp. 246–251, for an approach based on Taylor series expansions that can be used to approximate moments of the sample correlation.
8. The theorem can be extended to the case where \( X \sim N(\mu_x, \Sigma) \), in which case the condition for independence is that \( B\Sigma A^{\prime} = \mathbf{0} \).
9. This interval and associated probability were obtained by noting that if \( Y \sim \chi_{199}^2 \), then \( P(161.83 \le Y \le 239.96) = .95 \), which was obtained via numerical integration of the \( \chi_{199}^2 \) density, leaving .025 probability in both the right and left tails of the density. Using the relationship \( S^2 \sim (.015625/200)\,Y \) then leads to the stated interval.
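The probability quoted in this footnote can be reproduced by numerically integrating the \( \chi_{199}^2 \) density. The sketch below, using composite Simpson's rule and only the standard library, is an illustration of the computation the footnote describes, not the authors' code:

```python
from math import exp, lgamma, log

def chi2_logpdf(x, k):
    """Log of the chi-square density with k degrees of freedom
    (log scale avoids overflow in 2**(k/2) and Gamma(k/2) for large k)."""
    return (k / 2 - 1) * log(x) - x / 2 - (k / 2) * log(2) - lgamma(k / 2)

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# P(161.83 <= Y <= 239.96) for Y ~ chi-square with 199 df
p = simpson(lambda x: exp(chi2_logpdf(x, 199)), 161.83, 239.96)
print(round(p, 4))  # approximately 0.95, matching the footnote
assert abs(p - 0.95) < 0.001
```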
10. Although, as we have mentioned previously, the density function can always be identified in principle by an integration problem involving the MGF in the integrand.
11. Here, and elsewhere, we are suppressing a technical requirement that \( f \) be continuous at the point \( g^{-1}(b) \), so that we can invoke Lemma 6.1 for differentiation of the cumulative distribution function. Even if \( f \) is discontinuous at \( g^{-1}(b) \), we can nonetheless define \( h(b) \) as indicated above, since a density function can be redefined arbitrarily at a finite number of points of discontinuity without affecting the assignment of probabilities to any events.
12. These properties define a function that is piecewise invertible on the domain \( \cup_{i=1}^n D_i \).
13. Note that \( I(y) \) is an index set containing the indices of all of the \( D_i \) sets that have an element whose image under the function \( g \) is the value \( y \).
14. If max or min do not exist, they are replaced with sup and inf.
15. We are continuing to use the convention that \( y^{1/2} \) refers to the positive square root of \( y \), so that \( -y^{1/2} \) refers to the negative square root.
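A standard illustration of a piecewise invertible transformation (my example, not the text's) is \( Y = X^2 \) with \( X \) standard normal: \( g(x) = x^2 \) is invertible separately on \( D_1 = (-\infty, 0) \) and \( D_2 = [0, \infty) \), with inverses \( -y^{1/2} \) and \( y^{1/2} \), and summing \( f\big(g_i^{-1}(y)\big)\big|dg_i^{-1}(y)/dy\big| \) over both pieces recovers the \( \chi_1^2 \) density:

```python
from math import exp, pi, sqrt

def f(x):
    """Standard normal density."""
    return exp(-x * x / 2) / sqrt(2 * pi)

def h(y):
    """Density of Y = X**2 via the piecewise change-of-variables formula.

    g(x) = x**2 is invertible on D1 = (-inf, 0) and D2 = [0, inf),
    with inverses -sqrt(y) and sqrt(y); |d(+/-sqrt(y))/dy| = 1/(2*sqrt(y)).
    """
    jac = 1 / (2 * sqrt(y))
    return f(-sqrt(y)) * jac + f(sqrt(y)) * jac

def chi2_1_pdf(y):
    """Chi-square density with 1 degree of freedom, for comparison."""
    return exp(-y / 2) / (sqrt(2 * pi) * sqrt(y))

# the piecewise formula reproduces the chi-square(1) density exactly
for y in (0.5, 1.0, 2.0, 5.0):
    assert abs(h(y) - chi2_1_pdf(y)) < 1e-12
```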
 Title
 Sampling, Sample Moments and Sampling Distributions
 DOI
 https://doi.org/10.1007/978-1-4614-5022-1_6
 Author
 Ron C. Mittelhammer
 Publisher
 Springer New York
 Sequence number
 6
 Chapter number
 6