
About this book

Like most academic authors, my views are a joint product of my teaching and my research. Needless to say, my views reflect the biases that I have acquired. One way to articulate the rationale (and limitations) of my biases is through the preface of a truly great text of a previous era, Cooley and Lohnes (1971, p. v). They draw a distinction between mathematical statisticians whose intellect gave birth to the field of multivariate analysis, such as Hotelling, Bartlett, and Wilks, and those who chose to "concentrate much of their attention on methods of analyzing data in the sciences and of interpreting the results of statistical analysis . . . (and) . . . who are more interested in the sciences than in mathematics, among other characteristics." I find the distinction between individuals who are temperamentally "mathematicians" (whom philosophy students might call "Platonists") and "scientists" ("Aristotelians") useful as long as it is not pushed to the point where one assumes "mathematicians" completely disdain data and "scientists" are never interested in contributing to the mathematical foundations of their discipline. I certainly feel more comfortable attempting to contribute in the "scientist" rather than the "mathematician" role. As a consequence, this book is primarily written for individuals concerned with data analysis. However, as noted in Chapter 1, true expertise demands familiarity with both traditions.

Table of contents

Frontmatter

1. Introduction and Preview

Without Abstract
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng

2. Some Basic Statistical Concepts

Without Abstract
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng

3. Some Matrix Concepts

Without Abstract
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng

4. Multiple Regression and Correlation—Part 1. Basic Concepts

Without Abstract
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng

5. Multiple Regression and Correlation—Part 2. Advanced Applications

Abstract
Chapter 5 addresses advanced applications of the basic multiple regression model. Most of this chapter is devoted to the use of multiple regression in performing the analysis of variance (ANOVA). Be prepared; I will introduce the topic from a very different perspective than the one you were probably taught. The approach used here and in many other multivariate texts relies on multiple regression, not on what is generally called "partitioning of sums of squares." The results are identical to those obtained the way it is taught in basic statistics and experimental design courses, but use of multiple regression greatly simplifies many computations, especially those arising when the number of subjects in the various groups differs (the unequal-N case). One reason the ANOVA is not taught as a special case of multiple regression in undergraduate statistics courses is that it would require first covering basic material on multiple regression like that in Chapter 4. Teaching the ANOVA as a special case of multiple regression has the further advantage of illustrating how computer packages operate, since the packages do not use the methods you were taught as an undergraduate.
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng
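The regression route to the ANOVA described above can be sketched in a few lines. The following is a minimal illustration with made-up unequal-N data (not from the book): group membership is dummy-coded into a design matrix, the model is fit by least squares, and the resulting F statistic matches the one from the classic partitioning of sums of squares.

```python
import numpy as np

# Hypothetical data: three groups with unequal N (the case the chapter highlights).
y = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 6.0, 5.0])
group = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2])

# Design matrix: intercept plus dummy codes for two of the three groups.
X = np.column_stack([np.ones_like(y),
                     (group == 1).astype(float),
                     (group == 2).astype(float)])

# Least-squares fit of the group-coded regression model.
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
ss_res = np.sum((y - X @ beta) ** 2)          # within-group SS
ss_tot = np.sum((y - y.mean()) ** 2)          # total SS
df_between, df_within = 2, len(y) - 3
F_regression = ((ss_tot - ss_res) / df_between) / (ss_res / df_within)

# Classic partitioning-of-sums-of-squares F for comparison.
ss_between = sum(len(y[group == g]) * (y[group == g].mean() - y.mean()) ** 2
                 for g in range(3))
F_classic = (ss_between / df_between) / ((ss_tot - ss_between) / df_within)

print(np.isclose(F_regression, F_classic))  # the two routes agree
```

The fitted values of the dummy-coded regression are exactly the group means, which is why the two computations coincide even with unequal group sizes.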

6. Exploratory Factor Analysis

Abstract
Factor analysis may be viewed as a set of models for transforming a group of variables into a simpler and more useful form. In essence, linear combinations are formed from variables, and the resulting linear combinations are used to “predict” the original variables. The reason for using this apparently circular process is that a small number of linear combinations, which define a new set of variables, may be able to describe all or nearly all of the meaning of the larger set of original variables.
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng
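The circular-sounding process the abstract describes, forming linear combinations of the variables and then using them to "predict" those same variables, can be illustrated with a principal-components-style sketch (one of several extraction methods; the data and loadings below are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented data: six observed variables driven by two latent factors plus noise.
factors = rng.normal(size=(200, 2))
loadings = np.array([[0.9, 0.8, 0.7, 0.0, 0.0, 0.1],
                     [0.0, 0.1, 0.0, 0.9, 0.8, 0.7]])
X = factors @ loadings + 0.1 * rng.normal(size=(200, 6))
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize the observables

R = np.corrcoef(X, rowvar=False)              # correlation matrix
vals, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
top2 = vecs[:, -2:]                           # the two largest components

scores = X @ top2                             # linear combinations of the variables...
X_hat = scores @ top2.T                       # ...used to "predict" the originals

# Two combinations describe nearly all of the variance of six variables.
explained = 1 - np.sum((X - X_hat) ** 2) / np.sum(X ** 2)
```

Here two new variables (the component scores) carry nearly all of the information in the six original ones, which is the point of the apparently circular procedure.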

7. Confirmatory Factor Analysis

Abstract
As a general topic, confirmatory factor analysis refers to the examination of properties of linear combinations that have been defined in advance. This chapter surveys several situations in which such a priori linear combinations are likely to be found. Specifically, six major topics are discussed.
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng

8. Classification Methods—Part 1. Forming Discriminant Axes

Abstract
Chapter 8 is the first of three chapters devoted to classification. The material covered in this chapter deals largely with the process of combining two or more predictors to best discriminate among two or more groups—discriminant analysis. This chapter is divided into three main sections.
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng
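As a sketch of the core computation, Fisher's two-group discriminant axis can be obtained from the pooled within-group covariance matrix and the mean difference. The data below are simulated for illustration; the chapter itself treats the general multigroup case.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated two-group data on two predictors.
g1 = rng.normal(loc=[0.0, 0.0], size=(50, 2))
g2 = rng.normal(loc=[2.0, 1.0], size=(50, 2))

m1, m2 = g1.mean(axis=0), g2.mean(axis=0)
# Pooled within-group covariance matrix.
S_w = (np.cov(g1, rowvar=False) + np.cov(g2, rowvar=False)) / 2

# Fisher's discriminant axis: the weights that best separate the groups.
w = np.linalg.solve(S_w, m1 - m2)

def separation(a, b, v):
    """Standardized distance between group means along direction v."""
    return abs(a @ v - b @ v) / np.sqrt(v @ S_w @ v)

# The discriminant axis separates the groups at least as well as
# either predictor taken alone.
sep_w = separation(m1, m2, w)
sep_x1 = separation(m1, m2, np.array([1.0, 0.0]))
sep_x2 = separation(m1, m2, np.array([0.0, 1.0]))
```

Because the discriminant weights maximize the standardized mean separation over all directions, no single predictor can do better than the combination.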

9. Classification Methods—Part 2. Methods of Assignment

Abstract
The part of classification that I am now concerned with is how to classify an individual observation. In order to deal with some logical problems, I will first present Bayesian considerations about the values and base rates associated with a decision. The topic leads into signal detection theory, an area of decision making that ties together communication theory, statistics, econometrics, and psychophysics.
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng
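The role of base rates in assignment can be shown with a minimal two-group example (normal densities with equal spread; all numbers are invented). The posterior probability, and hence the assignment, depends on the prior as well as on the observation.

```python
import numpy as np

def posterior(x, base_rate, mu0=0.0, mu1=1.0, sigma=1.0):
    """P(group 1 | x) for two normal groups with a given base rate (prior)."""
    def density(x, mu):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    p1 = base_rate * density(x, mu1)
    p0 = (1 - base_rate) * density(x, mu0)
    return p1 / (p0 + p1)

x = 0.5  # an observation exactly between the two group means
print(posterior(x, base_rate=0.5))  # equal base rates: the evidence is balanced
print(posterior(x, base_rate=0.9))  # a high base rate dominates the decision
```

For an observation midway between the means the densities cancel, so the posterior equals the base rate itself: the same datum is assigned differently in populations with different group frequencies, which is the Bayesian point the chapter develops before turning to signal detection theory.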

10. Classification Methods—Part 3. Inferential Considerations in the MANOVA

Abstract
I am going to conclude the general topic of classification and discrimination with a consideration of null hypothesis testing. Much of this chapter deals with the multivariate analysis of variance (MANOVA) and related themes. I have mentioned at several earlier points in the text that testing a multivariate hypothesis of centroid (vector, profile) differences is more complex than testing a univariate hypothesis of mean (location) difference. The basic point to remember is that an inferential test that is the most powerful for detecting a difference when centroids are concentrated is not necessarily the most powerful test for detecting a difference when centroids are diffuse. The general strategy is to treat all unknown differences in structure as if they are diffuse.
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng
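One statistic commonly used for the diffuse case, Wilks' lambda, can be computed directly from the within- and between-group sums-of-squares-and-cross-products (SSCP) matrices. The data below are simulated for illustration; this sketch omits the inferential machinery (sampling distributions and approximations) that the chapter covers.

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated data: three groups measured on two variables.
groups = [rng.normal(loc=m, size=(30, 2)) for m in ([0, 0], [1, 0], [0, 1])]
X = np.vstack(groups)
grand = X.mean(axis=0)

# Within- and between-group SSCP matrices.
W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
B = sum(len(g) * np.outer(g.mean(axis=0) - grand, g.mean(axis=0) - grand)
        for g in groups)

# Wilks' lambda lies between 0 and 1; small values signal centroid differences.
wilks = np.linalg.det(W) / np.linalg.det(W + B)
```

Lambda compares generalized within-group variability to total variability, so it accumulates evidence of group differences spread over several dimensions rather than concentrated on one.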

11. Profile and Canonical Analysis

Abstract
This chapter is largely concerned with questions related to how similar two vectors (profiles) are. The issue of vector similarity appears directly in many classification problems, as was seen in Chapter 9, in particular. Vector similarity leads into the process of forming groups based on the similarity of their vectors, clustering. However, it also appears indirectly whenever one wishes to make a linear combination obtained from one set of variables as similar as possible to a linear combination obtained from a second set of variables, which is the topic of canonical analysis.
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng
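The canonical-analysis idea, making a linear combination of one variable set as similar as possible to a combination of a second set, reduces to an eigenproblem. A minimal sketch with simulated data in which the two sets share one underlying dimension:

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated data: two variable sets sharing one underlying dimension.
z = rng.normal(size=(200, 1))
X = z @ np.array([[0.8, 0.6, 0.7]]) + rng.normal(size=(200, 3))
Y = z @ np.array([[0.9, 0.5]]) + rng.normal(size=(200, 2))
Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)

Sxx, Syy, Sxy = Xc.T @ Xc, Yc.T @ Yc, Xc.T @ Yc
# First canonical correlation: square root of the largest eigenvalue
# of Sxx^-1 Sxy Syy^-1 Syx.
M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
r1 = np.sqrt(np.max(np.linalg.eigvals(M).real))

# It can be no smaller than any single cross-set correlation.
max_pairwise = np.max(np.abs(np.corrcoef(np.hstack([X, Y]), rowvar=False)[:3, 3:]))
```

Because the canonical weights maximize the correlation over all linear combinations, individual variables (a special case of a combination) can never correlate across sets more strongly than the first canonical pair.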

12. Analysis of Scales

Abstract
Chapter 12 is concerned with the statistical properties of multi-item scales. A distinguishing characteristic of the relevant analyses is that items are typically quite unreliable or "noisy" in the sense of containing considerable measurement error. The situation is therefore quite different from what was considered in the discussion of multiple regression, an important but contrasting model that assumed the predictors were reliable. As was noted in the earlier discussion, small amounts of measurement error would not seriously impair the use of the regression model, but highly unreliable predictors could produce spurious results.
Ira H. Bernstein, Calvin P. Garbin, Gary K. Teng
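The point about noisy items can be illustrated by aggregation: a single item correlates only weakly with the trait it measures, while the summed scale is far more reliable. A sketch with simulated items, using coefficient alpha as the reliability estimate (the chapter develops these ideas formally):

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated 5-item scale: each noisy item reflects one underlying true score.
true = rng.normal(size=(300, 1))
items = true + rng.normal(scale=1.5, size=(300, 5))   # items contain much error

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
# Coefficient alpha: reliability estimate for the summed scale.
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# A single item versus the summed scale as measures of the true score.
r_item = np.corrcoef(items[:, 0], true[:, 0])[0, 1]
r_scale = np.corrcoef(items.sum(axis=1), true[:, 0])[0, 1]
```

Summing pools the common true-score variance while the independent errors partially cancel, so the scale tracks the trait much better than any one item.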

Backmatter

Further information