
2004 | Book

Fuzzy Statistics

Author: Prof. James J. Buckley

Publisher: Springer Berlin Heidelberg

Book series: Studies in Fuzziness and Soft Computing


About this book

1.1 Introduction. This book is written in four major divisions. The first part is the introductory chapters, consisting of Chapters 1 and 2. In part two, Chapters 3–11, we develop fuzzy estimation. For example, in Chapter 3 we construct a fuzzy estimator for the mean of a normal distribution assuming the variance is known. More details on fuzzy estimation are in Chapter 3; after Chapter 3, Chapters 4–11 can be read independently. Part three, Chapters 12–20, is on fuzzy hypothesis testing. For example, in Chapter 12 we consider the test H₀: μ = μ₀ versus H₁: μ ≠ μ₀, where μ is the mean of a normal distribution with known variance, but we use a fuzzy number estimator of μ (from Chapter 3) in the test statistic. More details on fuzzy hypothesis testing are in Chapter 12; after Chapter 12, Chapters 13–20 may be read independently. Part four, Chapters 21–27, is on fuzzy regression and fuzzy prediction. We start with fuzzy correlation in Chapter 21. Simple linear regression is the topic of Chapters 22–24, and Chapters 25–27 concentrate on multiple linear regression. Part two (fuzzy estimation) is used in Chapters 22 and 25, and part three (fuzzy hypothesis testing) is employed in Chapters 24 and 27. Fuzzy prediction is contained in Chapters 23 and 26. A most important part of our models in fuzzy statistics is that we always start with a random sample producing crisp (non-fuzzy) data.

Table of contents

Frontmatter
Chapter 1. Introduction
Abstract
This book is written in four major divisions. The first part is the introductory chapters, consisting of Chapters 1 and 2. In part two, Chapters 3–11, we develop fuzzy estimation. For example, in Chapter 3 we construct a fuzzy estimator for the mean of a normal distribution assuming the variance is known. More details on fuzzy estimation are in Chapter 3; after Chapter 3, Chapters 4–11 can be read independently. Part three, Chapters 12–20, is on fuzzy hypothesis testing. For example, in Chapter 12 we consider the test H₀: μ = μ₀ versus H₁: μ ≠ μ₀, where μ is the mean of a normal distribution with known variance, but we use a fuzzy number estimator of μ (from Chapter 3) in the test statistic. More details on fuzzy hypothesis testing are in Chapter 12; after Chapter 12, Chapters 13–20 may be read independently. Part four, Chapters 21–27, is on fuzzy regression and fuzzy prediction. We start with fuzzy correlation in Chapter 21. Simple linear regression is the topic of Chapters 22–24, and Chapters 25–27 concentrate on multiple linear regression. Part two (fuzzy estimation) is used in Chapters 22 and 25, and part three (fuzzy hypothesis testing) is employed in Chapters 24 and 27. Fuzzy prediction is contained in Chapters 23 and 26.
James J. Buckley
Chapter 2. Fuzzy Sets
Abstract
In this chapter we have collected together the basic ideas from fuzzy sets and fuzzy functions needed for the book. Any reader familiar with fuzzy sets, fuzzy numbers, the extension principle, α-cuts, interval arithmetic, and fuzzy functions may go on and have a look at Section 2.5. In Section 2.5 we discuss the method we will be using in this book to evaluate comparisons between fuzzy numbers. That is, in Section 2.5 we need to decide which one of the following three possibilities is true: \(\overline{M} < \overline{N}\), \(\overline{M} \approx \overline{N}\), or \(\overline{M} > \overline{N}\), for fuzzy numbers \(\overline{M}\) and \(\overline{N}\). Section 2.5 will be employed in fuzzy hypothesis testing. Good general references for fuzzy sets and fuzzy logic are [4] and [8].
James J. Buckley
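The α-cuts and interval arithmetic mentioned in this abstract can be sketched in a few lines of Python. This is a minimal illustration only: the representation of a triangular fuzzy number as an (a, b, c) triple and the function names are assumptions for the sketch, not the book's notation.

```python
def tri_cut(fuzzy, alpha):
    """Alpha-cut [left, right] of a triangular fuzzy number (a, b, c),
    where b is the vertex (membership 1) and [a, c] is the support."""
    a, b, c = fuzzy
    return (a + alpha * (b - a), c - alpha * (c - b))

def interval_add(u, v):
    """Interval arithmetic: [u1, u2] + [v1, v2] = [u1 + v1, u2 + v2]."""
    return (u[0] + v[0], u[1] + v[1])

# Alpha-cuts of M + N are the interval sums of the alpha-cuts of M and N.
M, N = (1.0, 2.0, 3.0), (0.0, 1.0, 2.0)
cut_of_sum = interval_add(tri_cut(M, 0.5), tri_cut(N, 0.5))
```

As α rises from 0 to 1 the cuts shrink from the support down to the vertex, which is the nesting property that interval arithmetic on cuts relies on.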
Chapter 3. Estimate μ, Variance Known
Abstract
This starts a series of chapters, Chapters 3–11, on fuzzy estimation. In this chapter we first present some general information on fuzzy estimation and then concentrate on the mean of a normal probability distribution assuming the variance is known. The rest of the chapters on fuzzy estimation can be read independently.
James J. Buckley
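A common way to build such a fuzzy estimator is to stack confidence intervals on top of one another. The sketch below assumes the convention that the cut at height α is the (1 − α)100% confidence interval for μ with σ known; this is an illustrative convention, not necessarily the chapter's exact formulas.

```python
import math
from statistics import NormalDist

def fuzzy_mu_cuts(xbar, sigma, n, alphas):
    """Alpha-cuts of a fuzzy estimator for mu (sigma known), formed by
    stacking confidence intervals: the cut at height alpha is taken to be
    the (1 - alpha)100% CI  xbar +/- z * sigma / sqrt(n), 0 < alpha <= 1."""
    cuts = {}
    for a in alphas:
        z = NormalDist().inv_cdf(1 - a / 2)  # z-value for a (1 - a)100% CI
        half = z * sigma / math.sqrt(n)
        cuts[a] = (xbar - half, xbar + half)
    return cuts
```

At α = 1 the cut collapses to the crisp point estimate x̄, so the stacked intervals form a fuzzy number centered at x̄.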
Chapter 4. Estimate μ, Variance Unknown
Abstract
Consider X a random variable with probability density function N(μ, σ²), which is the normal probability density with unknown mean μ and unknown variance σ². To estimate μ we obtain a random sample X1, ..., Xn from N(μ, σ²). Suppose the mean of this random sample turns out to be \(\overline{x}\), which is a crisp number, not a fuzzy number. Also, let s² be the sample variance. Our point estimator of μ is \(\overline{x}\). If the values of the random sample are x1, ..., xn, then the expression we will use for s² in this book is
$$s^{2}=\sum_{i=1}^{n}(x_{i}-\overline{x})^{2}/(n-1).$$
(4.1)
We will use this form of s², with denominator (n − 1), so that it is an unbiased estimator of σ².
James J. Buckley
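The crisp statistic in equation (4.1) is easy to compute directly; a minimal sketch (function name is illustrative):

```python
def sample_variance(xs):
    """Sample variance s^2 = sum((x_i - xbar)^2) / (n - 1).
    The (n - 1) denominator makes s^2 an unbiased estimator of sigma^2."""
    n = len(xs)
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs) / (n - 1)
```

Dividing by n instead of n − 1 would underestimate σ² on average, which is why the (n − 1) form is used throughout.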
Chapter 5. Estimate p, Binomial Population
Abstract
We have an experiment in mind in which we are interested in only two possible outcomes labeled “success” and “failure”. Let p be the probability of a success so that q = 1−p will be the probability of a failure. We want to estimate the value of p. We therefore gather a random sample, which here means running the experiment n independent times and counting the number of times we had a success. Let x be the number of times we observed a success in n independent repetitions of this experiment. Then our point estimate of p is \(\hat{p} = x/n\).
James J. Buckley
Chapter 6. Estimate σ² from a Normal Population
Abstract
We first construct a fuzzy estimator for σ² using the usual confidence intervals for the variance from a normal distribution, and we show this fuzzy estimator is biased. Then in Section 6.3 we construct an unbiased fuzzy estimator for the variance.
James J. Buckley
Chapter 7. Estimate μ₁ − μ₂, Variances Known
Abstract
We have two populations: Pop I and Pop II. Pop I is normally distributed with unknown mean μ₁ and known variance σ₁². Pop II is also normally distributed with unknown mean μ₂ but known variance σ₂². We wish to construct a fuzzy estimator for μ₁ − μ₂.
James J. Buckley
Chapter 8. Estimate μ₁ − μ₂, Variances Unknown
Abstract
This continues Chapter 7 but now the population variances are unknown. We will use the notation of Chapter 7. There are three cases to look at and we will discuss each of these below.
James J. Buckley
Chapter 9. Estimate d = μ₁ − μ₂, Matched Pairs
Abstract
Let x1, ..., xn be the values of a random sample from a population Pop I. Let object i, or person i, belong to Pop I which produced measurement xi in the random sample, 1 ≤ i ≤ n. Then, at possibly some later time, we take a second measurement on object i (person i) and get value yi, 1 ≤ i ≤ n. Then (x1, y1), ..., (xn, yn) are n pairs of dependent measurements. For example, when testing the effectiveness of some treatment for high blood pressure, the xi are the blood pressure measurements before treatment and the yi are these measurements on the same person after treatment. The two samples are not independent, so we cannot use the results of Chapters 7 or 8.
James J. Buckley
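The crisp summary statistics behind a matched-pairs analysis are the pairwise differences and their mean and variance. A minimal sketch (the sign convention dᵢ = xᵢ − yᵢ, chosen to match d = μ₁ − μ₂, and the function name are assumptions):

```python
def paired_differences(xs, ys):
    """Crisp matched-pairs summary: differences d_i = x_i - y_i,
    their mean dbar (point estimate of d = mu_1 - mu_2), and the
    unbiased sample variance of the d_i."""
    n = len(xs)
    ds = [x - y for x, y in zip(xs, ys)]
    dbar = sum(ds) / n
    s2 = sum((d - dbar) ** 2 for d in ds) / (n - 1)
    return dbar, s2
```

Reducing each pair to a single difference is what sidesteps the dependence between the two samples.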
Chapter 10. Estimate p₁ − p₂, Binomial Populations
Abstract
In this chapter we have two binomial populations: Pop I and Pop II. In Pop I (II) let p₁ (p₂) be the probability of a “success”. We want a fuzzy estimator for p₁ − p₂.
James J. Buckley
Chapter 11. Estimate σ₁²/σ₂², Normal Populations
Abstract
We first discuss the non-fuzzy confidence intervals for the ratio of the variances. We then construct a fuzzy estimator for σ₁²/σ₂² from these crisp confidence intervals and show it is biased (see Chapter 6). Then we find an unbiased fuzzy estimator.
James J. Buckley
Chapter 12. Tests on µ, Variance Known
Abstract
This chapter starts a series of chapters (Chapters 12–20) on fuzzy hypothesis testing. In each chapter we first review the crisp case before proceeding to the fuzzy situation. We give more details on fuzzy hypothesis testing in this chapter.
James J. Buckley
Chapter 13. Tests on µ, Variance Unknown
Abstract
We start with a review of the crisp case and then present the fuzzy model.
James J. Buckley
Chapter 14. Tests on p for a Binomial Population
Abstract
We begin with the non-fuzzy test and then proceed to the fuzzy test.
James J. Buckley
Chapter 15. Tests on σ², Normal Population
Abstract
We first describe the crisp hypothesis test and then construct the fuzzy test.
James J. Buckley
Chapter 16. Tests μ₁ versus μ₂, Variances Known
Abstract
Let us first review the details on the crisp test.
James J. Buckley
Chapter 17. Test μ₁ versus μ₂, Variances Unknown
Abstract
We have two populations: Pop I and Pop II. Pop I is normally distributed with unknown mean μ₁ and unknown variance σ₁². Pop II is also normally distributed with unknown mean μ₂ and unknown variance σ₂². We wish to do the following statistical test
$$H_0 : \mu_1 - \mu_2 = 0$$
(17.1)
versus
$$H_1 : \mu_1 - \mu_2 \ne 0$$
(17.2)
We collect a random sample of size n1 from Pop I and let \(\bar{x}_{1}\) be the mean for this data and s₁² the sample variance. We also gather a random sample of size n2 from Pop II and \(\bar{x}_{2}\) is the mean for the second sample with s₂² the variance. We assume these two random samples are independent.
James J. Buckley
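The crisp test statistic behind (17.1)–(17.2) can be sketched for the classical pooled equal-variance case; this is one standard version only (an assumption here; the unequal-variance case uses a different statistic):

```python
import math

def pooled_t(x1bar, s1sq, n1, x2bar, s2sq, n2):
    """Two-sample t statistic for H0: mu1 - mu2 = 0, assuming equal
    but unknown variances. Result is t with n1 + n2 - 2 degrees of
    freedom under H0."""
    # Pooled estimate of the common variance.
    sp2 = ((n1 - 1) * s1sq + (n2 - 1) * s2sq) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    return (x1bar - x2bar) / se
```

In the fuzzy version of the test, the crisp estimates feeding this statistic are replaced by the fuzzy estimators of part two.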
Chapter 18. Test p₁ = p₂, Binomial Populations
Abstract
In this chapter we have two binomial populations: Pop I and Pop II. In Pop I (II) let p₁ (p₂) be the probability of a “success”. We want to do the following test
$$H_0 : p_1 - p_2 = 0$$
(18.1)
versus
$$H_1 : p_1 - p_2 \ne 0$$
(18.2)
James J. Buckley
Chapter 19. Test d = μ₁ − μ₂, Matched Pairs
Abstract
We begin following Chapter 9. Let x1, ..., xn be the values of a random sample from a population Pop I. Let object i, or person i, belong to Pop I which produced measurement xi in the random sample, 1 ≤ i ≤ n. Then, at possibly some later time, we take a second measurement on object i (person i) and get value yi, 1 ≤ i ≤ n. Then (x1, y1), ..., (xn, yn) are n pairs of dependent measurements. For example, when testing the effectiveness of some treatment for high blood pressure, the xi are the blood pressure measurements before treatment and the yi are these measurements on the same person after treatment. The two samples are not independent.
James J. Buckley
Chapter 20. Test σ₁² versus σ₂², Normal Populations
Abstract
We have two populations: Pop I and Pop II. Pop I is normally distributed with unknown mean μ₁ and unknown variance σ₁². Pop II is also normally distributed with unknown mean μ₂ and unknown variance σ₂². We wish to do the following statistical test
$$H_0 : \sigma_1^2 = \sigma_2^2$$
(20.1)
versus
$$H_1 : \sigma_1^2 \ne \sigma_2^2$$
(20.2)
We collect a random sample of size n1 from Pop I and let \(\bar{x}_{1}\) be the mean for this data and s₁² the sample variance. We also gather a random sample of size n2 from Pop II and \(\bar{x}_{2}\) is the mean for the second sample with s₂² the variance. We assume these two random samples are independent.
James J. Buckley
Chapter 21. Fuzzy Correlation
Abstract
We first present the crisp theory and then the fuzzy results. We have taken the crisp theory from Section 8.8 of [1].
James J. Buckley
Chapter 22. Estimation in Simple Linear Regression
Abstract
Let us first review the basic theory on crisp simple linear regression. Our development, throughout this chapter and the next two chapters, follows Sections 7.8 and 7.9 in [1]. We have some data (xi, yi), 1 ≤ i ≤ n, on two variables x and Y. Notice that we start with crisp data and not fuzzy data. Most papers on fuzzy regression assume fuzzy data. The values of x are known in advance and Y is a random variable. We assume that there is no uncertainty in the x data. We cannot predict a future value of Y with certainty, so we decide to focus on the mean of Y, E(Y). We assume that E(Y) is a linear function of x, say \(E(Y) = a + b(x - \overline{x})\). Here \(\overline{x}\) is the mean of the x-values and not a fuzzy set. Our model is
$$Y_{i} = a + b(x_{i} - \overline{x}) + \epsilon_{i}$$
(22.1)
where \(\epsilon_{i}\) are independent and N(0, σ²) with σ² unknown. The basic regression equation for the mean of Y is \(y = a + b(x - \overline{x})\) and now we wish to estimate the values of a and b. Notice our basic regression line is not y = a + bx, and the expression for the estimator of a will differ between the two models.
James J. Buckley
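The crisp least-squares step for the centered model can be sketched directly; it also shows why the estimator of a differs from the uncentered model (the function name is illustrative):

```python
def fit_centered_line(xs, ys):
    """Least squares for the centered model E(Y) = a + b(x - xbar).
    In this parameterization a_hat is simply ybar, which is why the
    estimator of a differs from the uncentered model y = a + b*x."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    a_hat = ybar
    return a_hat, b_hat, xbar
```

In the fuzzy version, crisp â and b̂ are replaced by fuzzy number estimators while the xᵢ stay crisp.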
Chapter 23. Fuzzy Prediction in Linear Regression
Abstract
From the previous chapter we have our fuzzy regression
$$\overline y \left( x \right) = \overline a + \overline b \left( {x - \overline x } \right) $$
(23.1)
equation for \(\overline y \left( x \right)\), with \(\overline a \) and \(\overline b \) fuzzy numbers and x and \(\overline x \) real numbers. \(\overline y \left( x \right)\) is our fuzzy number estimator for the mean of Y (E(Y)) given x, and we show this dependence on x with the notation \(\overline y \left( x \right)\). Now \(\overline x \) is a fixed real number but we may choose new values for x to predict new fuzzy values for E(Y).
James J. Buckley
Chapter 24. Hypothesis Testing in Regression
Abstract
We look at two fuzzy hypothesis tests in this chapter: (1) in the next section H0: a = a0 versus H1: a > a0, a one-sided test; and (2) in the following section H0: b = 0 versus H1: b ≠ 0. In both cases we first review the crisp (non-fuzzy) test before the fuzzy test. The non-fuzzy hypothesis tests are based on Sections 7.8 and 7.9 of [1].
James J. Buckley
Chapter 25. Estimation in Multiple Regression
Abstract
We first review the basic theory on crisp multiple linear regression. For simplicity let us work with only two independent variables x1 and x2. Our development, throughout this chapter and the next two chapters, follows [1]. We have some data (x1i, x2i, yi), 1 ≤ i ≤ n, on three variables x1, x2 and Y. Notice that we start with crisp data and not fuzzy data. The values of x1 and x2 are known in advance and Y is a random variable. We assume that there is no uncertainty in the x1 and x2 data. We cannot predict a future value of Y with certainty, so we decide to focus on the mean of Y, E(Y). We assume that E(Y) is a linear function of x1 and x2, say E(Y) = a + bx1 + cx2. Our model is
$$Y_{i} = a + bx_{1i} + cx_{2i} + \epsilon_{i}$$
(25.1)
1 ≤ in. The basic regression equation for the mean of Y is y = a+bx 1+cx 2 and now we wish to estimate the values of a, b and c.
James J. Buckley
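The crisp least-squares step for this two-predictor model amounts to solving the 3×3 normal equations (XᵀX)β = Xᵀy. A self-contained sketch (the function name and the tiny elimination solver are illustrative, not from the book):

```python
def fit_two_predictor(xs1, xs2, ys):
    """Least squares for E(Y) = a + b*x1 + c*x2 by solving the 3x3
    normal equations (X^T X) beta = X^T y with Gaussian elimination."""
    rows = [[1.0, u, v] for u, v in zip(xs1, xs2)]
    # Build X^T X and X^T y.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Forward elimination with partial pivoting.
    for k in range(3):
        p = max(range(k, 3), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, 3):
            m = A[i][k] / A[k][k]
            for j in range(k, 3):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution.
    beta = [0.0] * 3
    for i in range(2, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, 3))) / A[i][i]
    return beta  # [a_hat, b_hat, c_hat]
```

As in the simple-regression chapters, the fuzzy model replaces the crisp coefficient estimates with fuzzy numbers while the data stay crisp.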
Chapter 26. Fuzzy Prediction in Regression
Abstract
From the previous chapter we have our fuzzy regression equation
$$\overline y \left( {\mathop x\nolimits_1 ,\mathop x\nolimits_2 } \right) = \overline a + \overline b \mathop x\nolimits_1 + \overline c \mathop x\nolimits_2 $$
(26.1)
for \(\bar{y}(x_{1}, x_{2})\), with \(\bar{a}\), \(\bar{b}\) and \(\bar{c}\) fuzzy numbers and x1, x2 real numbers. \(\bar{y}(x_{1}, x_{2})\) is our fuzzy number estimator for the mean of Y (E(Y)) given x1 and x2, and we show this dependence on x1 and x2 with the notation \(\bar{y}(x_{1}, x_{2})\). We may choose new values for x1 and x2 to predict new fuzzy values for E(Y).
James J. Buckley
Chapter 27. Hypothesis Testing in Regression
Abstract
We look at two fuzzy hypothesis tests in this chapter: (1) in the next section H0: b = 0 versus H1: b > 0, a one-sided test; and (2) in the third section H0: c = 0 versus H1: c ≠ 0, a two-sided test. In both cases we first review the crisp (non-fuzzy) test before the fuzzy test. We could also run tests on a. However, we will continue to use the data in Table 25.1, where we determined â = −49.3413, so a is definitely negative and a test like H0: a = 0 versus H1: a < 0 seems a waste of time.
James J. Buckley
Chapter 28. Summary and Questions
Summary
This book introduces basic (elementary) fuzzy statistics based on crisp (non-fuzzy) data. After the Introduction and a brief survey of fuzzy sets (Chapter 2), the book is in three parts: (1) fuzzy estimation in Chapters 3–11; (2) fuzzy hypothesis testing in Chapters 12–20; and (3) fuzzy linear regression in Chapters 21–27. The key to the development is fuzzy estimation.
James J. Buckley
Chapter 29. Maple Commands
Abstract
In this chapter we give some of the Maple commands for the figures in the book. The commands are for Maple 6 but should also work in Maple 7, 8 and 9.
James J. Buckley
Backmatter
Metadata
Title
Fuzzy Statistics
Author
Prof. James J. Buckley
Copyright year
2004
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-39919-3
Print ISBN
978-3-642-05924-7
DOI
https://doi.org/10.1007/978-3-540-39919-3