
About this book

To celebrate Peter Huber's 60th birthday in 1994, our university had invited guests to a festive occasion on the afternoon of Thursday, June 9. The invitation to honour this outstanding personality was accepted by about fifty colleagues and former students, mainly from all over the world. Others, who could not attend, sent their congratulations by mail and e-mail (P. Bickel: "... It's hard to imagine that Peter turned 60 ..."). After a welcome address by Adalbert Kerber (dean), the following lectures were delivered:

Volker Strassen (Konstanz): Almost Sure Primes and Cryptography - an Introduction
Frank Hampel (Zurich): On the Philosophical Foundations of Statistics
Andreas Buja (Murray Hill): Projections and Sections - High-Dimensional Graphics for Data Analysis

The distinguished speakers lauded Peter Huber as a hard and fair mathematician, a cooperative and stimulating colleague, and an inspiring and helpful teacher. The Festkolloquium was framed by a musical program of the University's Brass Ensemble. The subsequent workshop "Robust Statistics, Data Analysis and Computer Intensive Methods" at Schloss Thurnau, from Friday until Sunday, June 9-12, was organized around the areas in statistics that Peter Huber himself has markedly shaped. In the time since the conference, most of the contributions could be edited for this volume, a late birthday present that may give a new impetus to further research in these fields.

Table of contents

Frontmatter

Bootstrap Variable-Selection and Confidence Sets

Abstract
This paper analyzes estimation by bootstrap variable-selection in a simple Gaussian model where the dimension of the unknown parameter may exceed that of the data. A naive use of the bootstrap in this problem produces risk estimators for candidate variable-selections that have a strong upward bias. Resampling from a less overfitted model removes the bias and leads to bootstrap variable-selections that minimize risk asymptotically. A related bootstrap technique generates confidence sets that are centered at the best bootstrap variable-selection and have two further properties: the asymptotic coverage probability for the unknown parameter is as desired and the confidence set is geometrically smaller than a classical competitor. The results suggest a possible approach to confidence sets in other inverse problems where a regularization technique is used.
Rudolf Beran

Robust Estimation in the Logistic Regression Model

Abstract
A new class of robust and Fisher-consistent M-estimates for the logistic regression model is introduced. We show that these estimates are consistent and asymptotically normal. Their robustness is studied through the computation of asymptotic bias curves under point-mass contamination for the case when the covariates follow a multivariate normal distribution. We illustrate the behavior of these estimates with two data sets. Finally, we mention some possible extensions of these M-estimates for a multinomial response.
Ana M. Bianco, Víctor J. Yohai
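A minimal sketch of the general idea of M-estimation for logistic regression, using a bounded loss applied to the per-observation deviances. The particular loss, the tuning constant c, and the omission of a Fisher-consistency correction term are simplifications for illustration only; this is not the estimator proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik_terms(eta, y):
    # per-observation logistic deviance: -log p(y | eta) = log(1 + e^eta) - y * eta
    return np.logaddexp(0.0, eta) - y * eta

def bounded_rho(d, c=0.5):
    # bounded loss: increasing for small d, constant beyond c (hypothetical tuning)
    return np.where(d <= c, d - d ** 2 / (2 * c), c / 2)

def robust_logit(X, y, c=0.5):
    # minimize the sum of the bounded loss over the coefficient vector
    obj = lambda b: np.sum(bounded_rho(neg_loglik_terms(X @ b, y), c))
    return minimize(obj, np.zeros(X.shape[1]), method="Nelder-Mead").x
```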

The m out of n Bootstrap and Goodness of Fit Tests with Double Censored Data

Abstract
This paper considers the use of the m out of n bootstrap (Bickel, Götze, and van Zwet, 1994) in setting critical values for Cramér-von Mises goodness of fit tests with doubly censored data. We show that, as might be expected, the usual n out of n nonparametric bootstrap fails to estimate the null distribution of the test statistic. We show that if the m out of n bootstrap with m → ∞, m = o(n) is used to set the critical value of the test, the proposed testing procedure is asymptotically level α, has the correct asymptotic power function for \(\sqrt n\) alternatives and is asymptotically consistent.
Peter J. Bickel, Jian-Jian Ren
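A generic sketch of the m out of n bootstrap for setting a critical value, assuming a test statistic function T supplied by the caller (any rate normalization is the caller's responsibility); the choice of m and resampling with replacement are illustrative assumptions, not the procedure of the paper.

```python
import numpy as np

def m_out_of_n_critical_value(x, T, m, B=999, alpha=0.05, rng=None):
    """Estimate the level-alpha critical value of T by resampling m of the n points."""
    rng = np.random.default_rng(rng)
    stats = []
    for _ in range(B):
        xb = rng.choice(x, size=m, replace=True)   # resample m out of n observations
        stats.append(T(xb))
    return np.quantile(stats, 1 - alpha)

# usage sketch: with data x of size n, m = int(n ** 0.5) is one hypothetical choice
# satisfying m -> infinity and m = o(n).
```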

What Criterion for a Power Algorithm?

Abstract
If Breiman and Friedman’s ACE algorithm is used with smoothing splines as building blocks, the question arises what criterion is being optimized. The obvious guess turns out to be wrong. The problem is that the building blocks are no longer orthogonal projections in the usual inner product. We describe the non-standard inner product that does yield orthogonal projections, and we indicate what the resulting “penalized \(R^2\)” criterion is.
Andreas Buja

Nonparametric Estimation of Global Functionals of Conditional Quantiles

Abstract
For fixed α ∈ (0,1), the quantile regression function gives the αth quantile \(\theta_\alpha(x)\) in the conditional distribution of a response variable Y given the value X = x of a vector of covariates. It can be used to measure the effect of covariates not only in the center of a population, but also in the upper and lower tails. When there are many covariates, the curse of dimensionality makes accurate estimation of the quantile regression function difficult. A functional that escapes this curse, at least asymptotically, and summarizes key features of the quantile-specific relationship between X and Y is the vector \(\beta_\alpha\) of weighted expected values of the vector of partial derivatives of the quantile function \(\theta_\alpha(x)\). In a nonparametric setting, \(\beta_\alpha\) can be regarded as the vector of quantile-specific nonparametric regression coefficients, while in semiparametric transformation and single index models, \(\beta_\alpha\) gives the direction of the parameter vector in the parametric part of the model. We show that, under suitable regularity conditions, the estimate of \(\beta_\alpha\) obtained by using the local polynomial quantile estimate of Chaudhuri (1991a) is \(\sqrt n\)-consistent and asymptotically normal with asymptotic variance equal to the variance of the influence function of the functional \(\beta_\alpha\). We discuss how the estimates of \(\beta_\alpha\) can be used for model diagnostics and in the construction of link function estimates in general single index models.
Probal Chaudhuri, Kjell Doksum, Alexander Samarov
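A worked rendering of the functional described above, under the assumption of a nonnegative weight function \(w\) (the paper's exact weighting and normalization may differ):
\[
\beta_\alpha \;=\; E\bigl[\, w(X)\, \nabla_x \theta_\alpha(X) \,\bigr],
\qquad
\nabla_x \theta_\alpha(x) = \Bigl(\tfrac{\partial \theta_\alpha(x)}{\partial x_1},\ldots,\tfrac{\partial \theta_\alpha(x)}{\partial x_d}\Bigr)^{\top}.
\]
In a single index model \(\theta_\alpha(x) = g_\alpha(x^{\top}\gamma)\), so \(\nabla_x \theta_\alpha(x) = g_\alpha'(x^{\top}\gamma)\,\gamma\) and \(\beta_\alpha\) is proportional to \(\gamma\); this is the sense in which \(\beta_\alpha\) gives the direction of the parametric part of the model.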

High Breakdown Point Estimators in Logistic Regression

Abstract
Estimators with high finite sample breakdown points are of special interest in robust statistics. However, in contrast to estimation in linear regression models, the breakdown point approach has not yet received much attention in logistic regression models. Although various robust estimators have been proposed for logistic regression models, their breakdown points are often not yet known. Here it is shown for logistic regression models with binary data that there is no estimator with a high finite sample breakdown point, if the estimator is required to fulfill a weak condition. However, in logistic regression models with large strata, modifications of Rousseeuw’s least median of squares estimator and least trimmed squares estimator have finite sample breakdown points of approximately 1/2. Both estimators are strongly consistent under a large supermodel of the logistic regression model. Existing programs can be used to compute such estimates.
Andreas Christmann
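For reference, one standard (replacement) definition of the finite sample breakdown point of an estimator \(T\) at a sample \(Z\) of size \(n\), in the spirit of Donoho and Huber; the paper may use a variant adapted to binary responses:
\[
\varepsilon_n^{*}(T, Z) \;=\; \min\Bigl\{\tfrac{m}{n} \,:\, \sup_{Z_m} \bigl\| T(Z_m) \bigr\| = \infty \Bigr\},
\]
where the supremum is over all samples \(Z_m\) obtained from \(Z\) by replacing \(m\) of the \(n\) observations by arbitrary values.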

The Guinea Pig of Multiple Regression

Abstract
The role played by a 21×4 data set in the development (progress and regress) of multiple regression is discussed. While this ‘famous’ data set has been (and is being) used as a guinea pig for almost every method of estimation introduced in the regression market, it appears that no one has questioned its origin and correctness in the last 30 years. An attempt is made to clarify some points in this regard. In particular, it is argued here that this data set is in fact a subset of a longer data set of which the rest is missing.
Yadolah Dodge

When is a p-Value a Good Measure of Evidence?

Abstract
Blyth and Staudte (1993, 1995a) introduced the notion of a level-α weight of evidence for an alternative to a hypothesis; it is an estimator of the ideal power function which gives for each experimental outcome a measure of how safe it is to reject the hypothesis in favour of the alternative. It is shown here that for a simple alternative to a one-sided hypothesis regarding a normal mean, the minimum risk level-α measure of evidence approaches the Neyman-Pearson level-α test critical function as the simple alternative moves to infinity. This result is not true for shift parameters of all distributions, as the Huber least-favourable example illustrates.
This result also suggests that for normal data when the p-value is compared with other measures of evidence of its own level, it will be found wanting, except for distant alternatives; plots of the corresponding risk functions confirm this.
Acceptability profiles (Blyth and Staudte, 1993, 1995a) are derived from a family of measures of evidence in the same way that confidence intervals are derived from a family of tests. Acceptability profiles derived from p-values and other measures of evidence are compared.
Michael B. Dollinger, Elena Kulinskaya, Robert G. Staudte

Geometric Properties of Principal Curves in the Plane

Abstract
Principal curves were introduced to formalize the notion of “a curve passing through the middle of a dataset”. Vaguely speaking, a curve is said to pass through the middle of a dataset if every point on the curve is the average of the observations projecting onto it. This idea can be made precise by defining principal curves for probability densities. Principal curves can be regarded as a generalization of linear principal components—if a principal curve happens to be a straight line, then it is a principal component. In this paper we study principal curves in the plane. We show that principal curves are solutions of a differential equation. By solving this differential equation, we find principal curves for uniform densities on rectangles and annuli. There are oscillating solutions besides the obvious straight and circular ones, indicating that principal curves in general will not be unique. If a density has several principal curves, they have to cross, a property somewhat analogous to the orthogonality of principal components. Finally, we study principal curves for spherical and elliptical distributions.
Tom Duchamp, Werner Stuetzle
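The notion of "passing through the middle" referred to above can be stated as the self-consistency condition of Hastie and Stuetzle (sketched here in its usual form, with regularity conditions omitted):
\[
f(\lambda) \;=\; E\bigl[\, X \mid \lambda_f(X) = \lambda \,\bigr],
\]
where \(\lambda_f(x)\) is the projection index, i.e. the parameter value of the point on the curve \(f\) closest to \(x\).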

On Robust Estimation of Variograms in Geostatistics

Abstract
The distribution of regionalized variables in the field of spatial statistics is mainly characterized by the variogram (or the covariance-function), which essentially is the variance of the differences of the variables in space. The usual estimate (empirical variance) of this function is highly non-robust.
Several alternative (robust) proposals for this estimation can be found in the literature; their definitions often do not follow an obvious intuition (e.g., the estimator of Cressie and Hawkins). The present paper considers different estimators of the variogram (including the application of very new scale estimators). The investigation is mainly done by simulation in the one-dimensional case: data on regular and irregular grids. The results are rather surprising: the behavior of the estimators is sometimes unexpected. The main reasons seem to be the high dependence between the data values and the small sample size when the spatial distribution is irregular; both, however, correspond to situations often met in practice.
Rudolf Dutter
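For orientation, the two estimators contrasted above are usually written as follows (formulas as commonly quoted in the geostatistics literature; constants should be checked against the original sources). The classical (Matheron) estimator and the Cressie-Hawkins proposal for the variogram at lag \(h\) are
\[
2\hat\gamma(h) = \frac{1}{|N(h)|}\sum_{(i,j)\in N(h)} \bigl(Z(s_i)-Z(s_j)\bigr)^2,
\qquad
2\bar\gamma(h) = \frac{\Bigl(\frac{1}{|N(h)|}\sum_{(i,j)\in N(h)} \bigl|Z(s_i)-Z(s_j)\bigr|^{1/2}\Bigr)^{4}}{0.457 + 0.494/|N(h)|},
\]
where \(N(h)\) is the set of pairs of locations separated (approximately) by \(h\).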

Advances in Nonparametric Function Estimation

Abstract
This paper has three aims: First, to demonstrate the need for nonparametric curve fitting techniques and to provide an introduction to such statistical techniques. Second, whatever method we want to apply, selection of a smoothing parameter or bandwidth is required. It would be desirable to determine the appropriate bandwidth from the data themselves, tuned to the problem at hand. Considerable progress has been made in recent years, most notably by the introduction of “plug-in” selectors, and the essence of this development is summarized. Third, a variety of function estimators has been discussed in the literature. Quite recently, local polynomial fitting has stirred a lot of interest and seemed to become the ultimate answer. The potential pitfalls of these estimators and some remedies are discussed.
Theo Gasser
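As a concrete illustration of the kind of estimator discussed here, a minimal local linear smoother with a Gaussian kernel is sketched below; the bandwidth h is taken as given and stands in for the data-driven plug-in selectors summarized in the paper.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear (degree-1 local polynomial) estimate of E[Y|X=x0]."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])    # local design: 1, (x - x0)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted least squares
    return beta[0]                                    # intercept = fitted value at x0
```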

On the Philosophical Foundations of Statistics: Bridges to Huber’s work, and recent results

Abstract
After a few words in honor of P.J. Huber and a brief introduction to a new frequentist paradigm for the foundations of statistics, which uses upper and lower probabilities and the concept of “successful bets”, the talk discusses some very recent research results with more general implications: successful bets for the normal and the exponential shift model, asymptotically successful bets, and a sketch for approximately successful bets in the context of robust statistics.
Frank Hampel

Data Based Prototyping

Abstract
Data Based Prototyping is a method for bringing massive sets of raw data into an analyzable form and for customizing an analysis system around the data. To obtain optimal results, data preparation should be geared towards the types of analyses planned and therefore should be done by experienced data analysts and not programmers. In this paper we outline the key ingredients of Data Based Prototyping and illustrate these with two case studies. The first of the two shows how 1 gigabyte of general hospitalization statistics can be made available for epidemiological studies, while the second example illustrates how data based prototyping can and should be applied to the development of complex information systems.
Thomas M. Huber, Matthias Nagel

Robust Regression with a Categorical Covariable

Abstract
A fast algorithm is presented for robust estimation of a linear model with a distributed intercept. This is a regression model in which the data set contains groups with the same slopes but different intercepts, a situation which often occurs in economics. In each group, the algorithm first looks for outliers in (x, y)-space by means of a robust projection method. Then a modified version of the resampling technique is applied to the whole data set, in order to find an approximation to least median of squares or other regression methods with a positive breakdown point. Because of the preliminary projections, the number of subsets may be drastically reduced. Simulations and examples show that the overall computation time is substantially lower than that of the straightforward algorithm. The method is illustrated with a real data set.
Mia Hubert, Peter J. Rousseeuw
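A generic sketch of the resampling idea referred to above: fit exact p-point subsets and keep the coefficient vector with the smallest median squared residual. The preliminary robust projections and the group-wise intercepts of the paper's algorithm are not reproduced here.

```python
import numpy as np

def lms_resampling(X, y, n_subsets=500, rng=None):
    """Approximate least median of squares regression by random p-subsets."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    best_beta, best_crit = None, np.inf
    for _ in range(n_subsets):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])   # exact fit to the p-point subset
        except np.linalg.LinAlgError:
            continue                                  # singular subset, skip it
        crit = np.median((y - X @ beta) ** 2)         # median squared residual
        if crit < best_crit:
            best_beta, best_crit = beta, crit
    return best_beta
```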

Robustness in Discriminant Analysis

Abstract
The problems of discriminant analysis in \(\mathbb{R}^N\) for L classes are considered for situations where the hypothetical (classical) model of the data is distorted. A classification of distortion types is given. The robustness of classical decision rules is evaluated with respect to distortions of the probability density functions of the observations to be classified, and robust decision rules are constructed.
Y. S. Kharin

M-Estimation and Spatial Quantiles

Abstract
An extension of the quantile function (M-quantiles), related in a certain sense to M-parameters of a probability distribution defined by convex minimization, is considered. Asymptotics of empirical M-quantiles are studied.
V. Koltchinskii
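One common formalization of spatial quantiles, given here as a hedged sketch rather than the paper's exact definition: for an index \(u\) in the open unit ball of \(\mathbb{R}^d\),
\[
Q(u) \;=\; \arg\min_{\theta \in \mathbb{R}^d} \; E\bigl[\, \|X-\theta\| - \|X\| + \langle u,\, X-\theta \rangle \,\bigr],
\]
and M-quantiles arise when the norm is replaced by a general convex function \(\rho\), so that the quantile is again defined through convex minimization.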

Robust Statistical Methods in Interlaboratory Analytical Studies

Abstract
Interlaboratory analytical study is the general term for an experiment organised by a committee and involving several laboratories to achieve a common goal. Two important types of studies are the method-performance studies and the laboratory-performance studies. The purpose of a method-performance study is to determine the precision and bias characteristics of an analytical test method. A laboratory-performance study ascertains whether the laboratories conform to stated standards in their testing activities. An iterative and a non-iterative method to calculate the estimates in a method-performance study are presented, and a new method based on a score function makes it possible to characterise the performance of laboratories both as groups and individually. This score is a squared Mahalanobis distance with robust estimates of means and covariances. For the determination of the latter, the specific structure of the interlaboratory-test data is taken into account. Instructive graphical displays support the classification of the laboratories.
Peter Lischer
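A minimal sketch of a score of the type described above, assuming the Minimum Covariance Determinant estimator as a stand-in for the paper's robust estimates of means and covariances (which exploit the structure of interlaboratory-test data).

```python
import numpy as np
from sklearn.covariance import MinCovDet

def lab_scores(results):
    """results: (n_labs, n_measurements) array; returns squared robust Mahalanobis distances."""
    mcd = MinCovDet().fit(results)        # robust location and scatter (illustrative choice)
    return mcd.mahalanobis(results)       # squared Mahalanobis distance of each laboratory
```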

Estimating Distributions with a Fixed Number of Modes

Abstract
A new approach to non- or semi-parametric density estimation allows one to specify modes and antimodes. The new smoother is a Maximum Penalized Likelihood (MPL) estimate with a novel roughness penalty. It penalizes a relative change of curvature, which allows modes and inflection points to be taken into account. For a given number of modes, the score function \(l' = (\log f)'\) can be represented as \(l'(x) = \pm (x - w_1)\cdots(x - w_m)\cdot \exp h_l(x)\), a semiparametric term with parameters \(w_j\) (model order m) and nonparametric part \(h_l(\cdot)\). The MPL variational problem is equivalent to a differential equation with boundary conditions. The exponential and normal distributions are the smoothest limits of the new estimator for zero and one mode, respectively.
Martin B. Mächler

Implementing M-Estimators of the Gamma Distribution

Abstract
Several types of M-estimators for multiparameter models have been proposed in Hampel et al. (1986). This paper considers 6 options and specializes them to the Gamma distribution, with the purpose of estimating its mean. It discusses their implementation; i.e., the numerical computation of the estimates, the determination of their tuning parameters, and the evaluation of measures that characterize their distribution (e.g., bias, variance, breakdown point, …). One of the options is selected for practical use.
A. Marazzi, C. Ruffieux

Constrained M-Estimation for Regression

Abstract
When using redescending M-estimates of regression, one must choose not only an estimate of scale; since the redescending M-estimating equations may admit multiple solutions, not all of which are desirable, one must also have a method for choosing a desirable solution to the estimating equations. We introduce here a new approach for properly scaling redescending M-estimating equations and for obtaining high breakdown point solutions to the equations, by introducing the constrained M-estimates of regression, or the CM-estimates of regression for short. Unlike the S-estimates of regression, the CM-estimates of regression can be tuned to obtain good local robustness properties while maintaining a breakdown point of 1/2.
Beatriz Mendes, David E. Tyler

Inference for the Direction of the Larger of Two Eigenvectors: The Case of Circular Elongation

Abstract
The paper discusses interval estimation of the dominant eigenvector for two-dimensional data. We derive adjustments for both the classical asymptotic formula of Anderson (1963) and two jackknife estimators and exhibit the finite sample behavior of the resulting interval (wedge) estimators in a variety of situations, including non-Gaussian and non-circular parent distributions.
Stephan Morgenthaler, John W. Tukey

High Breakdown Point Designs

Abstract
A linear model is assumed in which the experimental conditions are given by the experimenter, so that they are free of gross errors. For this situation we consider h-trimmed \(L_p\)-estimators, which generalize the least median of squares and least trimmed squares estimators. The breakdown point of these estimators depends strongly on the underlying design, and breakdown point maximizing estimators can be derived within this class. Breakdown point maximizing designs can also be derived, but these designs are often very different from the classically optimal designs, which ensure efficiency also for robust procedures. Therefore we consider compromises that provide designs with both high breakdown point and high efficiency.
Christine H. Müller
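A hedged reading of the h-trimmed \(L_p\) family named above (the paper's exact definition may differ): with \(|r(\beta)|_{(1)} \le \ldots \le |r(\beta)|_{(n)}\) the ordered absolute residuals,
\[
\hat\beta \;=\; \arg\min_{\beta} \; \sum_{i=1}^{h} \bigl| r(\beta) \bigr|_{(i)}^{\,p}.
\]
For \(p = 2\) this is the least trimmed squares criterion, while criteria dominated by the \(h\)-th ordered residual (such as least median of squares) appear as related limiting cases.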

On Bayesian Robustness: An Asymptotic Approach

Abstract
This paper presents a new asymptotic approach to study the robustness of Bayesian inference to changes in the prior distribution. We study the robustness of the posterior density score function when the uncertainty about the prior distribution has been restated as a problem of uncertainty about the model parametrization. Classical robustness tools, such as the influence function and the maximum bias function, are defined for uniparametric models and calculated for the location case. Possible extensions to other models are also briefly discussed.
Daniel Peña, Ruben H. Zamar

Computational Aspects of Trimmed Single-Link Clustering

Abstract
The main drawback of the single-link method is the chaining effect. A few observations between clusters can create a chain, i.e. a path of short edges joining the real clusters and thus making their single-link distances small. It has been suggested (Tabakis (1992a)) that a density estimator could help by eliminating those low-density observations creating the problem. Since the amount of trimming necessary is not known, we may have to compute and compare several minimal spanning trees using standard algorithms such as Prim’s algorithm. In this paper we are concerned with the computational aspects of the problem. In particular, we prove that trimmed single-link can be applied to n points in \(O(n^2)\) time, i.e. its time complexity does not exceed that of the classical single-link method. We also discuss the space requirements of the method.
Evangelos Tabakis
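A hedged sketch of the procedure outlined above: trim low-density points (scored here by a k-nearest-neighbour distance, a stand-in for the density estimator in the paper), build a minimum spanning tree on the remaining points, and cut its longest edges. SciPy's MST routine replaces an explicit Prim's algorithm.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def trimmed_single_link(X, trim_frac=0.1, n_clusters=2, k=5):
    D = squareform(pdist(X))
    knn_dist = np.sort(D, axis=1)[:, k]                # distance to k-th nearest neighbour
    keep = knn_dist <= np.quantile(knn_dist, 1 - trim_frac)   # drop low-density points
    mst = minimum_spanning_tree(D[np.ix_(keep, keep)]).toarray()
    # removing the (n_clusters - 1) longest MST edges yields the single-link clusters
    cut = np.sort(mst[mst > 0])[::-1][:n_clusters - 1].min() if n_clusters > 1 else np.inf
    mst[mst >= cut] = 0
    _, labels = connected_components((mst + mst.T) > 0, directed=False)
    return keep, labels                                 # mask of kept points, cluster labels
```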

Interval Probability on Finite Sample Spaces

Abstract
In the seventies and early eighties, the theory of interval probability gained very much from the contributions of Peter Huber. The present article gives an outline of an approach aiming to establish unifying foundations for that theory. A system of axioms for interval probability is described, generalizing Kolmogorov’s axioms for classical probability and distinguishing two types of interval probability. The first (R-probability) is characterized by interval limits which are not self-contradictory, while the interval limits of the second (F-probability) fit exactly to the set of classical probabilities in accordance with these interval limits (the structure). The task of checking whether an assignment of intervals to each random event of a finite sample space represents at least R-probability, or even F-probability, is accomplished by means of Linear Programming. The possibility of characterizing R-probability by features of its structure is sketched. A thorough presentation of the theory will be given in a forthcoming book by the author.
Kurt Weichselberger
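A minimal sketch of the linear-programming check mentioned above: given lower and upper limits for each event of a finite sample space, test whether some classical probability vector satisfies all of them. This only probes feasibility and does not reproduce the paper's full R-/F-probability apparatus.

```python
import numpy as np
from scipy.optimize import linprog

def feasible_probability(events, lower, upper, n_atoms):
    """events: list of index sets; lower/upper: interval limits per event."""
    A_ub, b_ub = [], []
    for ev, lo, up in zip(events, lower, upper):
        ind = np.zeros(n_atoms)
        ind[list(ev)] = 1.0
        A_ub.append(ind);  b_ub.append(up)     #  P(A) <= U(A)
        A_ub.append(-ind); b_ub.append(-lo)    # -P(A) <= -L(A)
    res = linprog(c=np.zeros(n_atoms), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, n_atoms)), b_eq=[1.0], bounds=(0, 1))
    return res.success                          # True if some probability fits all intervals
```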

On Consistency of Recursive Multivariate M-Estimators in Linear Models

Abstract
Strong consistency of recursive M-estimators of regression coefficients and scatter parameters in multivariate linear regression models is proved when the designs are nonrandom matrices satisfying \(\frac{1}{n} \textstyle \sum_{i=1}^n X_i^\prime X_i \rightarrow Q > 0\).
Yuehua Wu
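A hedged sketch of a recursive M-estimator of regression coefficients in the stochastic-approximation style (not necessarily the recursion studied in the paper): each new observation updates the estimate through a Huber psi-function and a running design matrix, with the error scale taken as known.

```python
import numpy as np

def huber_psi(r, c=1.345):
    # Huber psi-function for unit error scale (scale handling is simplified here)
    return np.clip(r, -c, c)

def recursive_m_estimate(X, y, c=1.345):
    n, p = X.shape
    beta = np.zeros(p)
    S = np.eye(p)                              # running design information, identity start
    for i in range(n):
        x = X[i]
        S += (np.outer(x, x) - S) / (i + 1)    # update (1/n) sum of x x'
        r = y[i] - x @ beta                    # residual at the current estimate
        beta += np.linalg.solve(S, x) * huber_psi(r, c) / (i + 1)
    return beta
```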

Backmatter
