
## About this Book

This reference manual for OpenStat, an open-source statistical package developed by William Miller, covers a broad spectrum of statistical methods and techniques. A unique feature is its compatibility with many other statistical programs. OpenStat users are researchers and students in the social sciences, education, and psychology, who benefit from its hands-on approach to statistics. During and after courses in statistics or measurement, students and future researchers need a low-cost computer program, and OpenStat fills this void. The software is used in statistics courses around the world, with over 50,000 downloads per year. The manual covers all functions of the OpenStat software, including measurement, ANOVAs, regression analyses, simulations, product-moment and partial correlations, and logistic regression. It is an important learning tool that explains the statistics behind the many analyses possible with the program and demonstrates those analyses.

## Table of Contents

### Chapter 1. Basic Statistics

Abstract
This chapter introduces the basic statistical concepts you will need throughout your use of the OpenStat package. You will be introduced to the symbols and formulas used to represent a number of concepts utilized in statistical inference, research design, measurement theory, multivariate analyses, etc. Like many people first starting to learn statistics, you may be easily overwhelmed by the symbols and formulas; don't worry, that is quite natural and does not mean you are incapable of learning them. You may, however, need to re-read sections several times before a concept is grasped. You will not be able to read statistics like a novel (don't we wish we could); rather, you must study a few lines at a time and be sure of your understanding before you proceed.
William Miller

### Chapter 2. Descriptive Statistics

Abstract
The mean is probably the parameter or statistic most often used to describe the central tendency of a population or sample. When discussing a population of scores, we denote the population mean with the Greek letter μ. When discussing the mean of a sample, we use the letter X with a bar above it. The sample mean is obtained as
$$\overline{X}=\frac{{\sum\limits_{i=1}^n {{X_i}} }}{n}$$
(2.1)
William Miller
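Eq. (2.1) is a direct sum-and-divide. As a minimal sketch (the scores below are hypothetical; OpenStat's descriptive statistics procedure would report the same value):

```python
# Sample mean, as in Eq. (2.1): X-bar = (sum of X_i) / n.
# The scores are hypothetical example data.
scores = [82, 75, 91, 68, 88]

n = len(scores)
mean = sum(scores) / n  # sum over i = 1..n of X_i, divided by n

print(mean)  # 80.8
```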

### Chapter 3. The Product Moment Correlation

Abstract
It seems most living creatures observe relationships, perhaps as a survival instinct. We observe signs that the weather is changing and prepare ourselves for the winter season. We observe that when seat belts are worn in cars, the number of fatalities in car accidents decreases. We observe that students who do well in one subject tend to perform well in other subjects. This chapter explores the linear relationship between observed phenomena.
William Miller
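The product-moment (Pearson) correlation the chapter develops can be sketched directly from its definition; the two score lists below are hypothetical example data for two course subjects:

```python
# Product-moment correlation:
# r = sum((x - mx)(y - my)) / sqrt(sum((x - mx)^2) * sum((y - my)^2))
import math

x = [70, 80, 90, 60, 85]  # scores in subject A (hypothetical)
y = [65, 78, 88, 58, 90]  # scores in subject B (hypothetical)

n = len(x)
mx, my = sum(x) / n, sum(y) / n
cross = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
r = cross / (sx * sy)
print(round(r, 3))
```

A value of r near +1 reflects the "do well in one, do well in the other" pattern described above.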

### Chapter 4. Multiple Regression

Abstract
One of the major applications in statistics is the prediction of one or more characteristics of individuals on the basis of knowledge about related characteristics. For example, common-sense observation has taught most of us that the amount of time we practice learning something is somewhat predictive of how well we perform on that thing we are trying to master. Our bowling score tends to improve (up to a point) in relationship to the amount of time we spend practicing bowling. In the social sciences however, we are often interested in predicting less obvious outcomes. For example, we may be interested in predicting how much a person might be expected to use a computer on the basis of a possible relationship between computer usage and other characteristics such as anxiety in using machines, mathematics aptitude, spatial visualization skills, etc. Often we have not even observed the relationships but instead must simply hypothesize that a relationship exists. In addition, we must hypothesize or assume the type of relationship between our variables of interest. Is the relationship a linear one? Is it a curvilinear one?
William Miller
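The practice-and-performance example above can be sketched with least-squares prediction for a single predictor; multiple regression generalizes the same idea to several predictors. The hours and scores below are hypothetical:

```python
# Least-squares prediction with one predictor:
# slope b = sum((x - mx)(y - my)) / sum((x - mx)^2), intercept a = my - b * mx.
hours = [2, 4, 6, 8, 10]       # hours of practice (hypothetical)
score = [120, 135, 150, 160, 175]  # bowling scores (hypothetical)

n = len(hours)
mx = sum(hours) / n
my = sum(score) / n
b = (sum((x - mx) * (y - my) for x, y in zip(hours, score))
     / sum((x - mx) ** 2 for x in hours))
a = my - b * mx

predicted = a + b * 7  # predicted score after 7 hours of practice
print(round(b, 2), round(a, 1), round(predicted, 2))
```

With several predictors, the single slope is replaced by a vector of weights found by solving the normal equations, but the prediction is still a linear combination of the predictors.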

### Chapter 5. Analysis of Variance

Abstract
While the “Student” t-test provides a powerful method for testing differences between population means by comparing sample means, when more than two groups are to be compared, the probability of finding at least one comparison significant by chance sampling error becomes greater than the alpha level (rate of Type I error) set by the investigator. Another method, the Analysis of Variance, provides a means of testing differences among more than two groups while retaining the overall probability level of alpha selected by the researcher. Your OpenStat4 package contains a variety of analysis of variance procedures to handle the various research designs encountered in evaluation research. These designs require different assumptions by the researcher in order for the statistical tests to be justified. Fundamental to nearly all research designs is the assumption that random sampling errors produce normally distributed score distributions and that experimental effects result in changes to the mean, not to the variance or shape of score distributions. A second assumption common to most designs using ANOVA is that the sub-populations sampled have equal score variances; this is the assumption of homogeneity of variance. A third common assumption is that the populations have been randomly sampled and are very large (effectively infinite) in size. A fourth assumption, for research designs in which individual subjects or units of observation are repeatedly measured, is that the correlation among these repeated measures is the same for the populations sampled; this is called the assumption of homogeneity of covariance.
William Miller
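The core of a one-way ANOVA can be sketched as a partition of variation into between-group and within-group sums of squares, from which the F ratio is formed. The three groups below are hypothetical:

```python
# One-way ANOVA: F = (SS_between / df_between) / (SS_within / df_within).
groups = [            # hypothetical scores for three treatment groups
    [4, 5, 6, 5],
    [7, 8, 6, 7],
    [10, 9, 11, 10],
]

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-group SS: each group's size times its squared mean deviation.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-group SS: squared deviations of scores from their own group mean.
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)
print(round(F, 2))
```

A large F indicates that the variation among group means is large relative to the variation within groups, and it is referred to the F distribution with (df_between, df_within) degrees of freedom for a single overall test at the chosen alpha.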

### Chapter 6. Non-Parametric Statistics

Abstract
Beginning statistics students are usually introduced to what are called “parametric” statistical methods. Those methods utilize “models” of score distributions such as the normal (Gaussian), Poisson, and binomial distributions. The emphasis in parametric statistical methods is on estimating population parameters from sample statistics when the distribution of the population scores can be assumed to follow one of these theoretical models. The observations are also assumed to be based on continuous variables measured on an interval or ratio scale. Frequently, however, the measurement scales available yield only nominal or ordinal values, and nothing can be assumed about the distribution of such values in the population sampled. If random sampling has been utilized in selecting subjects, one can still make inferences about relationships and differences similar to those made with parametric statistics. For example, if students enrolled in two courses are assigned a rank on their achievement in each of the two courses, it is reasonable to expect that students who rank high in one course would tend to rank high in the other. Since a rank only indicates order, however, and not “how much” was achieved, we cannot use the usual product-moment correlation to indicate the relationship between the ranks. We can, however, estimate what the sum of products of rank values in a group of n subjects would tend to be if the ranks were randomly assigned, and estimate the variability of these rank-product sums over repeated samples. This leads to a test of significance of the departure of our rank-product sum (or average) from the value expected when there is no relationship.
William Miller
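The two-course ranking example above is the setting for Spearman's rank correlation; for ranks without ties it has the well-known closed form below. The rank lists are hypothetical:

```python
# Spearman's rank correlation (no ties):
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = difference in ranks.
ranks_a = [1, 2, 3, 4, 5]  # ranks of five students in course A (hypothetical)
ranks_b = [2, 1, 4, 3, 5]  # ranks of the same students in course B

n = len(ranks_a)
d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
print(rho)  # prints 0.8
```

Because only the order of scores is used, rho can be computed and tested without any assumption about the shape of the underlying score distributions.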

### Chapter 7. Statistical Process Control

Abstract
Statistical Process Control (SPC) has become a major factor in the reduction of manufacturing process errors over the past years. Sometimes known as the Deming methods, after the person who introduced them to Japan and then to the United States, they have become necessary tools in quality control processes. Since many employees in manufacturing have a limited background in statistics, much depends on the creation of charts and their interpretation. The statistics that underlie these charts are often those introduced in previous sections. The unique aspect of SPC is the presentation of data in the charts themselves.
William Miller
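A common SPC chart, the X-bar chart, plots subgroup means against a center line and control limits. A minimal sketch, using hypothetical subgroup measurements and estimating sigma from the pooled within-subgroup variance (one of several estimators used in SPC practice):

```python
# X-bar chart limits: center line at the grand mean,
# control limits at +/- 3 standard errors (the common "3-sigma" rule).
import math

subgroups = [               # hypothetical repeated samples of size 3
    [10.1, 9.9, 10.0],
    [10.2, 10.0, 9.8],
    [9.9, 10.1, 10.0],
    [10.0, 10.2, 9.9],
]

means = [sum(g) / len(g) for g in subgroups]
grand_mean = sum(means) / len(means)

# Pooled within-subgroup variance, then the standard error of a subgroup mean.
n = len(subgroups[0])
pooled_var = sum(
    sum((x - sum(g) / len(g)) ** 2 for x in g) for g in subgroups
) / (len(subgroups) * (n - 1))
se = math.sqrt(pooled_var / n)

lcl = grand_mean - 3 * se   # lower control limit
ucl = grand_mean + 3 * se   # upper control limit
print(round(lcl, 3), round(grand_mean, 3), round(ucl, 3))
```

A subgroup mean falling outside [lcl, ucl] signals that the process may have shifted, which is exactly the visual judgment the charts are designed to support.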

### Chapter 8. Linear Programming

Abstract
Linear programming is a subset of a larger area of application called mathematical programming. Its purpose is to provide a means by which a person may find an optimal solution for a problem involving objects or processes with fixed ‘costs’ (e.g. money, time, resources) and one or more ‘constraints’ imposed on the objects. As an example, consider the situation where a manufacturer wishes to produce 100 lb of an alloy which is 83% lead, 14% iron and 3% antimony. Assume he has at his disposal five existing alloys with the following characteristics:
William Miller
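The alloy-blending problem takes the standard linear-programming form (the alloy compositions and costs themselves are not reproduced here, so the coefficients below are symbolic):

$$\min_{x}\;\sum\limits_{j=1}^{5} c_j x_j \quad \text{subject to} \quad \sum\limits_{j=1}^{5} x_j = 100, \qquad \sum\limits_{j=1}^{5} a_{ij} x_j = 100\, b_i, \qquad x_j \ge 0$$

where $x_j$ is the pounds of alloy $j$ used, $c_j$ its cost per pound, $a_{ij}$ the fraction of metal $i$ (lead, iron, or antimony) in alloy $j$, and $b_i$ the target fraction of metal $i$ (0.83, 0.14, 0.03).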

### Chapter 9. Measurement

Abstract
Evaluators base their evaluations on information. This information comes from a number of sources such as financial records, production cost estimates, sales records, state legal code books, etc. Frequently the evaluator must collect additional data using instruments that he or she alone has developed or acquired from external sources. This is often the case for the evaluation of training and educational programs, evaluation of personnel policies and their impacts, evaluation of social and psychological environments of the workplace, and the evaluation of proposed changes in the way people do business or work.
William Miller

### Backmatter

Further Information