1990 | Book

Regression Analysis

Theory, Methods, and Applications

Authors: Ashish Sen, Muni Srivastava

Publisher: Springer New York

Book Series: Springer Texts in Statistics

About this book

Any method of fitting equations to data may be called regression. Such equations are valuable for at least two purposes: making predictions and judging the strength of relationships. Because they provide a way of empirically identifying how a variable is affected by other variables, regression methods have become essential in a wide range of fields, including the social sciences, engineering, medical research and business. Of the various methods of performing regression, least squares is the most widely used. In fact, linear least squares regression is by far the most widely used of any statistical technique. Although nonlinear least squares is covered in an appendix, this book is mainly about linear least squares applied to fit a single equation (as opposed to a system of equations).

The writing of this book started in 1982. Since then, various drafts have been used at the University of Toronto for teaching a semester-long course to juniors, seniors and graduate students in a number of fields, including statistics, pharmacology, engineering, economics, forestry and the behavioral sciences. Parts of the book have also been used in a quarter-long course given to Master's and Ph.D. students in public administration, urban planning and engineering at the University of Illinois at Chicago (UIC). This experience and the comments and criticisms from students helped forge the final version.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
Perception of relationships is the cornerstone of civilization. By understanding how certain phenomena depend on others we learn to predict the consequences of our actions and to manipulate our environment. Most relationships we know of are based on empirical observations. Although some relationships are postulated on theoretical grounds, usually the theories themselves were originally obtained from empirical observations. And even these relationships often need to be empirically tested.
Ashish Sen, Muni Srivastava
Chapter 2. Multiple Regression
Abstract
Formulae for multiple regression are much more compact in matrix notation. Therefore, we shall start off in the next section applying such notation first to simple regression, which we considered in Chapter 1, and then to multiple regression. After that, we shall derive formulae for least squares estimates and present properties of these estimates. These properties will be derived under the Gauss-Markov conditions which were presented in Chapter 1 and are essentially restated in Section 2.5.
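As a rough illustration of the matrix notation this chapter builds on, the sketch below computes the least squares estimate b = (X'X)⁻¹X'y and its estimated covariance matrix on simulated data. The variables, numbers and seed are invented for illustration; this is not an example from the book.

```python
import numpy as np

# Minimal sketch: least squares in matrix notation, on made-up data.
# Model: y = X beta + error, with a column of ones for the intercept.
rng = np.random.default_rng(0)
n = 50
x1 = rng.uniform(0, 10, n)            # hypothetical regressor 1
x2 = rng.uniform(0, 5, n)             # hypothetical regressor 2
y = 2.0 + 1.5 * x1 + 0.8 * x2 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x1, x2])

# Least squares estimate b = (X'X)^{-1} X'y, solved without an explicit inverse
b = np.linalg.solve(X.T @ X, X.T @ y)

residuals = y - X @ b
s2 = residuals @ residuals / (n - X.shape[1])   # unbiased estimate of sigma^2
cov_b = s2 * np.linalg.inv(X.T @ X)             # estimated covariance of b
print("estimates:", b)
print("standard errors:", np.sqrt(np.diag(cov_b)))
```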
Ashish Sen, Muni Srivastava
Chapter 3. Tests and Confidence Regions
Abstract
Consider the example of house prices that we saw in the last chapter. It is reasonable to ask one or more of the following questions:
(a) Is the selling price affected by the number of rooms in a house, given that the other independent variables (e.g., floor area, lot size) remain the same?
(b) Suppose the realtor says that adding a garage will add $5000 to the selling price of the house. Can this be true?
(c) Do lot size and floor area affect the price equally?
(d) Does either lot size or floor area have any effect on prices?
(e) Can it be true that storm windows add $6000 and a garage adds $4000 to the price of a house?
For some data sets we may even be interested in seeing whether all the independent variables together have any effect on the response variable. In the first part of this chapter we consider such testing problems. In the second part of the chapter we shall study the related issue of determining confidence intervals and regions. In the last section we return to testing.
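The sketch below shows one standard way a question of type (a) is addressed: a t-test of whether a single coefficient is zero, holding the other variables fixed. The house-price data here are simulated and the variable names are stand-ins, not the book's example.

```python
import numpy as np
from scipy import stats

# Minimal sketch of a t-test for one coefficient on simulated data.
rng = np.random.default_rng(1)
n = 60
rooms = rng.integers(3, 10, n).astype(float)
area = rng.uniform(800, 3000, n)
price = 20000 + 4000 * rooms + 30 * area + rng.normal(0, 5000, n)

X = np.column_stack([np.ones(n), rooms, area])
b = np.linalg.solve(X.T @ X, X.T @ price)

resid = price - X @ b
df = n - X.shape[1]
s2 = resid @ resid / df
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))

# H0: the 'rooms' coefficient is zero, other variables held fixed.
t_rooms = b[1] / se[1]
p_value = 2 * stats.t.sf(abs(t_rooms), df)
print(f"t = {t_rooms:.2f}, p = {p_value:.4f}")
```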
Ashish Sen, Muni Srivastava
Chapter 4. Indicator Variables
Abstract
Indicator or dummy variables are variables that take only two values —0 and 1. Normally 1 represents the presence of some attribute and 0 its absence. We have already encountered such variables in Example 2.2, p. 31. They have a wide range of uses which will be discussed in this chapter. We shall mainly be concerned with using them as independent variables. In general, their use as dependent variables in least squares analysis is not recommended. Nonetheless, we shall consider the topic further in the final section of this chapter.
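As a small illustration of the 0/1 coding described above, the sketch below includes a hypothetical "garage" indicator among the regressors; its coefficient estimates the shift in price associated with having a garage, other variables held fixed. The data are simulated, not drawn from Example 2.2.

```python
import numpy as np

# Minimal sketch of an indicator (dummy) variable as a regressor.
rng = np.random.default_rng(2)
n = 40
area = rng.uniform(800, 2500, n)
garage = rng.integers(0, 2, n).astype(float)   # 1 = has a garage, 0 = does not
price = 15000 + 40 * area + 5000 * garage + rng.normal(0, 3000, n)

X = np.column_stack([np.ones(n), area, garage])
b = np.linalg.solve(X.T @ X, X.T @ price)
# b[2] estimates the price shift associated with a garage, holding area fixed.
print("estimated garage effect:", b[2])
```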
Ashish Sen, Muni Srivastava
Chapter 5. The Normality Assumption
Abstract
In much of the work presented in the last four chapters we have assumed that the Gauss-Markov conditions were true. We have also sometimes made the additional assumption that the errors and, therefore, the dependent variables were normally distributed. In practice, these assumptions do not always hold; in fact, quite often, at least one of them will be violated. In this and the next four chapters we shall examine how to check whether each of the assumptions actually holds and what, if anything, we can do if it does not. This chapter is devoted to the normality assumption.
Ashish Sen, Muni Srivastava
Chapter 6. Unequal Variances
Abstract
One of the great values of the Gauss-Markov theorem is that it provides conditions which, if they hold, assure us that least squares is a good procedure. These conditions can be checked and if we find that one or more of them are seriously violated, we can take action that will cause at least approximate compliance. This and the next few chapters will deal with various ways in which these G-M conditions can be violated and what we would then need to do.
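For the specific violation named in the chapter title, a common remedial action is weighted least squares. The sketch below assumes (purely for illustration) that the error variance grows with x² and weights accordingly; it is a generic example, not the book's treatment.

```python
import numpy as np

# Minimal sketch of weighted least squares under unequal error variances.
# Assumption for illustration: Var(error_i) is proportional to x_i^2.
rng = np.random.default_rng(7)
n = 60
x = rng.uniform(1, 10, n)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5 * x)      # spread grows with x

X = np.column_stack([np.ones(n), x])
w = 1.0 / x**2                                   # weights = 1 / assumed variance
W = np.diag(w)

b_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
print("WLS:", b_wls, "OLS:", b_ols)
```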
Ashish Sen, Muni Srivastava
Chapter 7. *Correlated Errors
Abstract
Continuing with our examination of violations of Gauss-Markov conditions, in this chapter we examine the case where
$$E(\epsilon\epsilon') = \sigma^2\Omega$$
(7.1)
could be non-diagonal; i.e., some \(E(\epsilon_i\epsilon_j)\)'s may be non-zero even when \(i \neq j\). Cases of this kind do occur with some frequency. For example, observations of the same phenomena (e.g., per capita income) taken over time are often correlated (serial correlation), observations (e.g., of median rent) from points or zones in space that are close together are often more alike than observations taken from points further apart (spatial correlation), and observations from the same production run or using the same laboratory equipment often resemble each other more than those from distinct runs.
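When Ω in (7.1) is taken as known, one standard estimator is generalized least squares, b = (X'Ω⁻¹X)⁻¹X'Ω⁻¹y. The sketch below uses an AR(1)-style correlation matrix as a stand-in for Ω and compares GLS with ordinary least squares; the data and the correlation pattern are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of generalized least squares with a known, non-diagonal Omega.
rng = np.random.default_rng(3)
n, rho = 30, 0.6
t = np.arange(n, dtype=float)
X = np.column_stack([np.ones(n), t])

Omega = rho ** np.abs(np.subtract.outer(t, t))        # AR(1)-style serial correlation
errors = np.linalg.cholesky(Omega) @ rng.normal(0, 1, n)
y = 1.0 + 0.5 * t + errors

# GLS estimate: b = (X' Omega^{-1} X)^{-1} X' Omega^{-1} y
Oinv = np.linalg.inv(Omega)
b_gls = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
print("GLS:", b_gls, "OLS:", b_ols)
```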
Ashish Sen, Muni Srivastava
Chapter 8. Outliers and Influential Observations
Abstract
Sometimes, while most of the observations fit the model and meet G-M conditions at least approximately, some of the observations do not. This occurs when there is something wrong with the observations or when the model is faulty.
Ashish Sen, Muni Srivastava
Chapter 9. Transformations
Abstract
Until now we have considered situations where the algebraic form of the model, i.e.,
$$\beta_0 + \beta_1 x_{i1} + \cdots + \beta_k x_{ik},$$
was approximately correct. Obviously, this will not always be so. The actual relationship may not be a linear function of the \(x_{ij}\)'s and sometimes not even of the \(\beta_j\)'s. In some such cases we may still be able to do linear regression by transforming (i.e., using functions of) the independent and/or the dependent variables.
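A familiar example of such a transformation is fitting a power law y = a·x^b by regressing log y on log x. The sketch below does exactly that on simulated data; the power-law form, the numbers and the multiplicative error are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch: fit y = a * x^b by regressing log y on log x.
rng = np.random.default_rng(4)
n = 50
x = rng.uniform(1, 100, n)
y = 3.0 * x ** 0.7 * np.exp(rng.normal(0, 0.1, n))   # multiplicative error

X = np.column_stack([np.ones(n), np.log(x)])
b = np.linalg.solve(X.T @ X, X.T @ np.log(y))
a_hat, b_hat = np.exp(b[0]), b[1]
print(f"a ~ {a_hat:.2f}, b ~ {b_hat:.2f}")
```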
Ashish Sen, Muni Srivastava
Chapter 10. Multicollinearity
Abstract
Until this point, the only difficulties with least squares estimation that we have considered have been associated with violations of Gauss-Markov conditions. These conditions only assure us that least squares estimates will be ‘best’ for a given set of independent variables; i.e., for a given X matrix. Unfortunately, the quality of estimates, as measured by their variances, can be seriously and adversely affected if the independent variables are closely related to each other. This situation, which (with a slight abuse of language) is called multicollinearity, is the subject of this chapter and is also the underlying factor that motivates the methods treated in Chapters 11 and 12.
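One common way to gauge how much the variances are inflated by near-linear dependence among the columns of X is the variance inflation factor, VIF_j = 1/(1 − R_j²), where R_j² comes from regressing column j on the others. The sketch below computes VIFs on simulated data in which one regressor nearly duplicates another; it is a generic diagnostic, not necessarily the book's notation.

```python
import numpy as np

# Minimal sketch of variance inflation factors on simulated, nearly collinear data.
rng = np.random.default_rng(5)
n = 100
x1 = rng.normal(0, 1, n)
x2 = x1 + rng.normal(0, 0.05, n)       # nearly collinear with x1
x3 = rng.normal(0, 1, n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """R^2 from regressing column j on the other columns, turned into a VIF."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    b = np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    resid = X[:, j] - A @ b
    r2 = 1 - resid @ resid / np.sum((X[:, j] - X[:, j].mean()) ** 2)
    return 1 / (1 - r2)

print([round(vif(X, j), 1) for j in range(X.shape[1])])
```

The first two VIFs come out very large while the third stays near 1, mirroring the variance inflation the chapter is concerned with.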
Ashish Sen, Muni Srivastava
Chapter 11. Variable Selection
Abstract
Frequently we start out with a fairly long list of independent variables that we suspect have some effect on the dependent variable, but for various reasons we would like to cull the list. One important reason is the resultant parsimony: It is easier to work with simpler models. Another is that reducing the number of variables often reduces multicollinearity. Still another reason is that it lowers the ratio of the number of variables to the number of observations, which is beneficial in many ways.
Ashish Sen, Muni Srivastava
Chapter 12. *Biased Estimation
Abstract
One purpose of variable selection is to reduce multicollinearity, although, as we noted in Section 11.2, reducing the number of independent variables can lead to bias. Obviously, the general principle is that it might be preferable to trade off a small amount of bias in order to substantially reduce the variances of the estimates of β. There are several other methods of estimation which are also based on trading off bias for variance. This chapter describes three of these: principal component regression, ridge regression and the shrinkage estimator.
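As a concrete instance of trading bias for variance, the sketch below computes the ridge estimator b(k) = (X'X + kI)⁻¹X'y on centered, standardized data for a few values of k; the data and the particular k values are arbitrary choices for illustration, not the book's example or its recommended way of choosing k.

```python
import numpy as np

# Minimal sketch of ridge regression on a simulated, collinear pair of regressors.
rng = np.random.default_rng(6)
n = 80
x1 = rng.normal(0, 1, n)
x2 = x1 + rng.normal(0, 0.1, n)        # collinear with x1
y = 1.0 * x1 + 1.0 * x2 + rng.normal(0, 1, n)

X = np.column_stack([x1, x2])
Xs = (X - X.mean(0)) / X.std(0)        # standardize the columns
ys = y - y.mean()

for k in [0.0, 1.0, 10.0, 100.0]:
    b_k = np.linalg.solve(Xs.T @ Xs + k * np.eye(2), Xs.T @ ys)
    print(f"k = {k:6.1f}  coefficients: {np.round(b_k, 2)}")
```

At k = 0 this reduces to ordinary least squares; as k grows, the estimates are pulled toward zero, with (one hopes) a marked drop in variance for a modest bias.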
Ashish Sen, Muni Srivastava
Backmatter
Metadata
Title
Regression Analysis
Authors
Ashish Sen
Muni Srivastava
Copyright Year
1990
Publisher
Springer New York
Electronic ISBN
978-1-4612-4470-7
Print ISBN
978-1-4612-8789-6
DOI
https://doi.org/10.1007/978-1-4612-4470-7