
Table of Contents

Frontmatter

Introduction and Summary

Chapter 1. Introduction and Summary

Abstract
The term “robustness” does not lend itself to a clear-cut statistical definition. It seems to have been introduced by G. E. P. Box in 1953 to cover a rather vague concept described in the following way by Kendall and Buckland (1981). Their dictionary states:
Robustness. Many test procedures involving probability levels depend for their exactitude on assumptions concerning the generating mechanism, e. g. that the parent variation is Normal (Gaussian). If the inferences are little affected by departure from those assumptions, e. g. if the significance points of a test vary little if the population departs quite substantially from Normality, the tests on the inferences are said to be robust. In a rather more general sense, a statistical procedure is described as robust if it is not very sensitive to departure from the assumptions on which it depends.
This quotation clearly associates robustness with the applicability of the various statistical procedures. The two complementary questions that arise can be expressed as follows: first, how wide is the field of application of a given statistical procedure or, equivalently, is it robust against some departure from the assumptions? Second, how should we design a statistical procedure to be robust or, in other terms, to remain reliable in spite of possible uncertainty in the available information?
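The contrast the quotation describes can be shown in a few lines. A minimal illustration (the sample is invented for the purpose, not from the book): one gross error is enough to displace the sample mean substantially, while the median is barely affected.

```python
import statistics

# A well-behaved sample and the same sample with one gross error.
clean = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3, 10.0, 9.9, 10.1]
contaminated = clean[:-1] + [100.0]  # one observation replaced by an outlier

mean_shift = statistics.mean(contaminated) - statistics.mean(clean)
median_shift = statistics.median(contaminated) - statistics.median(clean)
print(mean_shift)    # the mean jumps by roughly nine units
print(median_shift)  # the median does not move at all
```

In the dictionary's terms, the median is robust against this departure from the assumed model; the mean is not.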
William J. J. Rey

The Theoretical Background

Chapter 2. Sample spaces, distributions, estimators …

Abstract
In the following sections, we present a large body of theory which is still in its infancy. We could have striven for rigor, generality and completeness but, then, we would have been stuck halfway in conjectures. The main issues are not yet clear. For simplicity, we have decided to restrict ourselves to what we need, leaving to the experts the task of extending and improving the main arguments. We hope that, sooner or later, some motivated analysts highly qualified in topology will sustain the investigations of von Mises and Prokhorov. We ourselves resort to the elementary tools of mathematical analysis.
William J. J. Rey

Chapter 3. Robustness, breakdown point and influence function

Abstract
In this section, we shall restate the viewpoint of Hampel (1968 – 1971) in different words. Eventually, at the end of § 3.3, we will be in a position to give a more convenient delimitation of the concept of robustness. In this work, we keep to Hampel’s viewpoints although they are increasingly open to fundamental criticisms. Several attempts have been made to define robustness in more manageable terms, but no serious breakthrough has been achieved thus far. We feel that some new views will gradually gain credit and that they might be along the lines proposed by Rieder (1980).
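Hampel's influence function has a simple finite-sample analogue, the sensitivity curve, which measures how much one additional observation placed at x displaces an estimate. A minimal sketch (the estimators and data are illustrative choices, not from the text):

```python
import statistics

def sensitivity_curve(estimator, sample, x):
    """Finite-sample sensitivity curve: (n + 1) * (T(sample + [x]) - T(sample)),
    the scaled displacement caused by one extra observation at x."""
    n = len(sample)
    return (n + 1) * (estimator(sample + [x]) - estimator(sample))

sample = [1.0, 2.0, 3.0, 4.0, 5.0]
# For the mean the curve grows without bound in x (non-robust);
# for the median it stays bounded however far x is moved.
print(sensitivity_curve(statistics.mean, sample, 100.0))
print(sensitivity_curve(statistics.median, sample, 100.0))
```

A bounded sensitivity curve is the finite-sample counterpart of the bounded influence function central to Hampel's notion of robustness.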
William J. J. Rey

Chapter 4. The jackknife method

Abstract
The so-called jackknife method was introduced by Quenouille to reduce bias in estimation and was then progressively extended to obtain estimates of variances. It is interesting in that it yields an improved estimator and enables the estimator to be assessed in an inexpensive way, that is, inexpensive with regard to the methodology, although the computation, if required, is not necessarily so. By all standards the results thus obtained are impressive in most cases; but in a few, not very well defined, circumstances the results are either poor or ridiculous.
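The mechanics just described, leave-one-out recomputation followed by bias correction and a variance estimate, can be sketched in a few lines (an illustrative implementation, not the book's own code):

```python
import statistics

def jackknife(estimator, sample):
    """Leave-one-out jackknife: return the bias-corrected estimate
    and the jackknife estimate of the estimator's variance."""
    n = len(sample)
    theta_hat = estimator(sample)
    leave_one_out = [estimator(sample[:i] + sample[i + 1:]) for i in range(n)]
    theta_dot = sum(leave_one_out) / n
    bias = (n - 1) * (theta_dot - theta_hat)          # Quenouille's bias estimate
    variance = (n - 1) / n * sum((t - theta_dot) ** 2 for t in leave_one_out)
    return theta_hat - bias, variance
```

For the sample mean the bias correction vanishes and the variance estimate reduces to the familiar s²/n, which is a useful sanity check.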
William J. J. Rey

Chapter 5. Bootstrap methods, sampling distributions

Abstract
In two similar papers, Efron (1979a, 1979b) has advanced views on the very old problem of estimation. Do we need distribution models? Should we rely on distribution models? His answer is a definite challenge and could give a new sense to the fundamental concepts. Let the sample speak for itself and do not interfere with extraneous assumptions. This is strictly consistent with our own views.
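Letting the sample speak for itself amounts, in Efron's scheme, to resampling the data with replacement and recomputing the estimate on each resample. A minimal sketch for a bootstrap standard error (the estimator, replication count and seed are illustrative choices):

```python
import random
import statistics

def bootstrap_se(estimator, sample, n_boot=2000, seed=0):
    """Bootstrap standard error: draw n_boot resamples of the data
    with replacement, re-estimate on each, and take the spread
    of the replicates. No distribution model is assumed."""
    rng = random.Random(seed)
    n = len(sample)
    replicates = [estimator([rng.choice(sample) for _ in range(n)])
                  for _ in range(n_boot)]
    return statistics.stdev(replicates)
```

For the mean of a small sample, the result is close to the classical s/√n, but the same routine applies unchanged to estimators for which no variance formula is known.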
William J. J. Rey

Part B

Frontmatter

Chapter 6. Type M estimators

Abstract
The type M estimators, also called M estimators, are generalizations of the usual maximum likelihood estimates. Classically, ϑ is the parameter value maximizing the likelihood function, i. e. we have in obvious notation
$$ L = \prod_i f(x_i \mid \vartheta) = \max \text{ for } \vartheta $$
or equivalently
$$ -\ln L = -\sum_i \ln f(x_i \mid \vartheta) = \min \text{ for } \vartheta . $$
The estimators of type M are solutions of the more general structure
$$ M = \sum_i \rho(x_i, \vartheta) = \min \text{ for } \vartheta \qquad (6.1) $$
where the function ρ(·) may be rather arbitrary. Before proceeding, we note that the above structure is rarely appropriate for processing correlated observations.
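Structure (6.1) can be made concrete with Huber's ρ for a location parameter: quadratic near zero, linear in the tails, so that distant observations get bounded weight. A sketch solved by iteratively reweighted averaging, assuming a known unit scale; the tuning constant k = 1.345 is a conventional choice, not one prescribed by this chapter:

```python
def huber_m_location(xs, k=1.345, tol=1e-8, max_iter=100):
    """M estimate of location for Huber's rho: minimise sum_i rho(x_i - theta),
    with rho quadratic for |r| <= k and linear beyond. Solved by
    iteratively reweighted averaging with weights psi(r)/r."""
    theta = sorted(xs)[len(xs) // 2]  # start from (roughly) the median
    for _ in range(max_iter):
        weights = [1.0 if abs(x - theta) <= k else k / abs(x - theta)
                   for x in xs]
        new_theta = sum(w * x for w, x in zip(weights, xs)) / sum(weights)
        if abs(new_theta - theta) < tol:
            return new_theta
        theta = new_theta
    return theta
```

On clean symmetric data the estimate agrees with the mean; a single gross outlier pulls it only slightly, which is the robustness that motivates replacing −ln f by a more arbitrary ρ.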
William J. J. Rey

Chapter 7. Type L estimators

Abstract
In this chapter we will be concerned with estimators that are “linear combinations of order statistics” in the strict sense of the words. A more general view will be given with the quantile estimators of Chap. 10, which are such that a given quantile of some error distribution is minimized.
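The best-known member of this class is the α-trimmed mean, which discards the g smallest and g largest order statistics and averages the rest. A minimal sketch (the trimming fraction is an illustrative choice):

```python
def trimmed_mean(xs, alpha=0.1):
    """alpha-trimmed mean: an L estimator that drops the g smallest and
    g largest order statistics, g = floor(alpha * n), and averages
    the remaining middle portion of the sorted sample."""
    n = len(xs)
    g = int(alpha * n)
    middle = sorted(xs)[g:n - g]
    return sum(middle) / len(middle)
```

With α = 0 this is the ordinary mean; as α approaches 1/2 it tends to the median, so the trimming fraction interpolates between the two.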
William J. J. Rey

Chapter 8. Type R estimator

Abstract
In the 1972 Wald lecture, Huber was mainly concerned with robust location estimators. In order to present all possible estimator structures, he introduced three classes, M, L and R. We are now interested in the members of the third class, i. e. the estimators that have been obtained from rank tests. They will be introduced according to Huber’s viewpoint and therefore have little in common with the R estimators of Hogg (1979a, b); the latter estimators can be regarded as an intelligent blend of M and L estimators.
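A classical example of an estimator derived from a rank test is the Hodges–Lehmann location estimate, associated with the Wilcoxon signed-rank test: the median of all pairwise (Walsh) averages. A brief sketch (quadratic in n, adequate for illustration):

```python
import statistics

def hodges_lehmann(xs):
    """Hodges-Lehmann location estimate: the median of all Walsh
    averages (x_i + x_j) / 2 with i <= j, the R estimator derived
    from the Wilcoxon signed-rank test."""
    walsh = [(xs[i] + xs[j]) / 2
             for i in range(len(xs)) for j in range(i, len(xs))]
    return statistics.median(walsh)
```

Like the median it resists gross outliers, but it is considerably more efficient at the Normal model.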
William J. J. Rey

Chapter 9. Type MM estimators

Abstract
The MM estimators are generalizations of the type M estimators we discussed in Chapter 6. The fact that they occur much more frequently than the simple M estimators is the main justification for treating them separately. In particular, the computational procedures are usually arranged so that an approximate factorization of the corresponding type M estimation problem takes place (see § 9.8); this often permits the computer load to be reduced. We now introduce these MM estimators with frequent references to Chapter 6.
William J. J. Rey

Chapter 10. Quantile estimators and confidence intervals

Abstract
We have met various techniques suitable for making point estimates in a more or less robust way; we know how to evaluate the precision of the estimators by their variance-covariance matrices, and we even have the third moments. However, we have only very limited information on the distributions of the estimators and can hardly draw any inference as to the sample space. Inferences on the sample space will be our first concern; then we will return to distributional aspects.
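One distribution-free inference of the kind intended here: a confidence interval for the median built from two order statistics, whose coverage follows from the binomial distribution alone. A sketch (the indexing conventions and the fallback for very small samples are our own choices, not the book's derivation):

```python
from math import comb

def median_ci(xs, conf=0.95):
    """Distribution-free confidence interval for the median: the interval
    [x_(k), x_(n+1-k)] between order statistics has coverage
    1 - 2 * P(Bin(n, 1/2) <= k - 1), whatever the parent distribution."""
    xs = sorted(xs)
    n = len(xs)
    tail = (1.0 - conf) / 2.0
    # largest k with P(Bin(n, 1/2) <= k - 1) <= tail
    k, cum = 0, 0.0
    while k < n // 2 and cum + comb(n, k) / 2 ** n <= tail:
        cum += comb(n, k) / 2 ** n
        k += 1
    k = max(k, 1)  # very small samples: fall back to the full range
    return xs[k - 1], xs[n - k]
```

For n = 10 and conf = 0.95 this yields the pair (x₍₂₎, x₍₉₎), a standard result for the order-statistic interval around the median.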
William J. J. Rey

Chapter 11. Miscellaneous

Abstract
To some readers it may appear odd that we have not yet considered what outliers are and how to handle them, whereas others may believe that thus far we have been concerned with nothing but the question of how to treat them. Both attitudes are correct inasmuch as they are complementary; we have accepted any outliers and, in doing so, have taken into account a loss of efficiency in order to be less dependent on them than we would be otherwise. Let us remember that our main interest is in having robust methods for comparison with non-robust standard methods. All robust methods handle outliers without having to specify whether observations are outlying or not.
William J. J. Rey

Chapter 12. References

Without Abstract
William J. J. Rey

Backmatter
