About this Book

In linear regression the ordinary least squares estimator plays a central role, and sometimes one may get the impression that it is the only reasonable and applicable estimator available. Nonetheless, there exists a variety of alternatives which prove useful in specific situations. Purpose and Scope. This book aims at presenting a comprehensive survey of different point estimation methods in linear regression, along with the theoretical background at the level of an advanced course. Besides its possible use as a companion for specific courses, it should be helpful for purposes of further reading, giving detailed explanations on many topics in this field. Numerical examples and graphics help to deepen the insight into the specifics of the presented methods. For the purpose of self-containment, the basic theory of linear regression models and least squares is presented, and the fundamentals of decision theory and matrix algebra are also included. Some basic prior knowledge, however, is necessary for easy reading and understanding.

Table of Contents

Frontmatter

Point Estimation and Linear Regression

Frontmatter

1. Fundamentals

Abstract
This chapter gives a brief introduction to the theory of linear regression models and develops a small numerical example, providing the opportunity to pose some of the central problems. In addition, because of its relevance for the comparison of point estimators, an introduction to the basics of decision theory is given.
Jürgen Groß

2. The Linear Regression Model

Abstract
In this chapter, we consider point estimation of the parameters β ∈ ℝ^p and σ² ∈ (0, ∞) in the linear regression model
$$ y = X\beta + \varepsilon, \quad \varepsilon \sim (0, \sigma^2 I_n) $$
We will focus our attention on the ordinary least squares estimator
$$ \hat\beta = (X'X)^{-1} X'y $$
and the least squares variance estimator
$$ \hat\sigma^2 = \frac{1}{n - p}(y - X\hat\beta)'(y - X\hat\beta) $$
both estimators being unbiased for β and σ², respectively.
Jürgen Groß
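
The two estimators above can be computed directly from their matrix formulas. The following minimal R sketch does so on simulated data; the dimensions, design matrix, and true parameter are illustrative assumptions, not taken from the book.

    # Minimal sketch: OLS estimator and least squares variance estimator,
    # computed from the matrix formulas on simulated data (illustrative only)
    set.seed(1)
    n <- 50; p <- 3
    X <- cbind(1, matrix(rnorm(n * (p - 1)), n, p - 1))  # design with intercept
    beta <- c(2, -1, 0.5)                                # assumed true parameter
    y <- X %*% beta + rnorm(n)                           # errors with sigma^2 = 1
    beta.hat <- solve(t(X) %*% X) %*% t(X) %*% y         # (X'X)^{-1} X'y
    sigma2.hat <- sum((y - X %*% beta.hat)^2) / (n - p)  # unbiased for sigma^2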

Alternatives to Least Squares Estimation

Frontmatter

3. Alternative Estimators

Abstract
In the previous chapter we considered the non-linear Stein estimator as an example of an alternative to the ordinary least squares estimator that can have specific advantages. Now we present further alternative estimators whose use can be shown to be beneficial in certain situations. In general, we proceed by presenting an alternative as a linear estimator and comparing its properties with those of the ordinary least squares estimator. We then demonstrate the practical use of one or more non-linear variants of the estimator in question.
Jürgen Groß
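
As a concrete illustration of a non-linear alternative of the kind discussed here, the following R sketch applies a Stein-type shrinkage factor to the ordinary least squares estimator. The particular factor and the positive-part truncation are common textbook choices assumed for illustration, not necessarily the variant treated in the book.

    # Sketch: Stein-type shrinkage of the OLS estimator toward the origin
    set.seed(2)
    n <- 100; p <- 5
    X <- matrix(rnorm(n * p), n, p)
    y <- X %*% rep(0.3, p) + rnorm(n)
    beta.hat <- solve(t(X) %*% X, t(X) %*% y)
    sigma2.hat <- sum((y - X %*% beta.hat)^2) / (n - p)
    # assumed textbook factor: 1 - (p-2)*sigma2.hat / (beta.hat' X'X beta.hat)
    shrink <- 1 - (p - 2) * sigma2.hat / drop(t(beta.hat) %*% t(X) %*% X %*% beta.hat)
    beta.stein <- max(shrink, 0) * beta.hat              # positive-part variant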

4. Linear Admissibility

Abstract
Chapters 2 and 3 deal with alternatives to the ordinary least squares estimator \( \hat \beta \) for β. Each of these can be viewed as being based on some linear estimator for β that is admissible within the set of all linear estimators. This guarantees that the alternative linear estimator under consideration is better than the ordinary least squares estimator for at least one possible value of the unknown β. The present chapter investigates the structure of linearly admissible estimators.
Jürgen Groß

Miscellaneous Topics

Frontmatter

5. The Covariance Matrix of the Error Vector

Abstract
Assumption (iv) of the linear regression model states that the covariance matrix of the error vector ε is Cov(ε) = σ²Iₙ with an unknown parameter σ² ∈ (0, ∞). This chapter discusses the estimation of σ² in detail and introduces situations under which it appears reasonable to extend assumption (iv) to Cov(ε) = σ²V for some symmetric positive or nonnegative definite matrix V ≠ Iₙ.
Jürgen Groß
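
When V is known and positive definite, a standard estimator under the extended assumption is the generalized least squares estimator (X'V⁻¹X)⁻¹X'V⁻¹y. The following R sketch illustrates it; the AR(1)-type correlation matrix is an assumed example, not taken from the book.

    # Sketch: generalized least squares under Cov(eps) = sigma^2 V,
    # with a known positive definite V (illustrative AR(1)-type structure)
    set.seed(3)
    n <- 40
    X <- cbind(1, rnorm(n))
    V <- 0.7^abs(outer(1:n, 1:n, "-"))           # assumed correlation structure
    R <- chol(V)                                 # V = R'R
    y <- drop(X %*% c(1, 2) + t(R) %*% rnorm(n)) # errors with Cov(eps) = V
    Vinv <- solve(V)
    beta.gls <- solve(t(X) %*% Vinv %*% X, t(X) %*% Vinv %*% y)  # (X'V^{-1}X)^{-1} X'V^{-1}y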

6. Regression Diagnostics

Abstract
When linear regression methods are applied to given data, useful results can only be expected when the chosen model is reasonably plausible, meaning that no substantial indications of inconsistencies or violations of model assumptions can be found. This concerns, e.g., the choice and number of incorporated variables, the goodness of fit, the degree of collinearity, the distribution of the errors (normality, homogeneity, uncorrelatedness), the linearity of the assumed relationship, and the influence of individual observations.
Jürgen Groß
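
Several of the diagnostics listed above are directly available in R for a fitted model object. A minimal sketch, using the built-in cars data purely as an illustrative example:

    # Sketch: standard diagnostic quantities and plots for a fitted model
    fit <- lm(dist ~ speed, data = cars)  # built-in illustrative data set
    hatvalues(fit)                        # leverages of individual observations
    rstudent(fit)                         # studentized residuals
    cooks.distance(fit)                   # Cook's distances (influence)
    par(mfrow = c(2, 2))
    plot(fit)                             # residual, Q-Q, scale-location, leverage plots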

A. Matrix Algebra

Abstract
This appendix presents basic and more advanced definitions and theorems in matrix algebra that are useful for understanding the results in this book. Proofs are omitted here. More comprehensive treatments of this topic with an emphasis on a statistical background are given in [50, 88, 91]. In addition, [59, 78, 133] provide beneficial presentations of the theory of matrices. A collection of useful results from different areas of matrix algebra is given in [73].
Jürgen Groß

B. Stochastic Vectors

Abstract
Results are given for vectors and matrices whose elements are random variables.
Jürgen Groß

C. An Example Analysis with R

Abstract
In this appendix we describe a possible analysis of a linear regression model with the statistical computing environment R, which is based on the S language. R is available through the Internet under the General Public License (GPL) and can be downloaded from one of the CRAN (Comprehensive R Archive Network) sites; see the main site http://cran.r-project.org/
Jürgen Groß
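
A minimal session of the kind described in this appendix might look as follows; the built-in stackloss data set and the chosen model are assumptions for illustration, not the book's worked example.

    # Sketch: a basic linear regression analysis in R (illustrative data)
    fit <- lm(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc., data = stackloss)
    summary(fit)   # coefficient estimates, standard errors, R^2
    coef(fit)      # the ordinary least squares estimate of beta
    sigma(fit)^2   # the least squares variance estimate
    confint(fit)   # confidence intervals for the coefficients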

Backmatter
