
About this Book

Like its predecessor, this second volume presents detailed applications of Bayesian statistical analysis, each of which emphasizes the scientific context of the problems it attempts to solve. The emphasis of this volume is on biomedical applications. These papers were presented at a workshop at Carnegie-Mellon University in 1993.

Table of Contents

Frontmatter

Invited Papers

Frontmatter

A Bayesian Model for Organ Blood Flow Measurement with Colored Microspheres

Summary
The development of quantitative methods to measure organ blood flow is an active area of research in physiology. Under current protocols, radiolabeled microspheres are injected into the circulation of an experimental animal and blood flow to an organ is estimated based on uptake of radioactivity. Growing concerns about environmental pollution, laboratory exposure to radioactivity, and the increasing costs of radioactive waste disposal have led to the development of microspheres labeled with non-radioactive colored markers. Because colored microspheres are new, little research has been devoted to developing statistical methods appropriate for the analysis of data collected from their use. In this paper we present a Bayesian approach to the problem of organ blood flow measurement with colored microspheres. We derive a Poisson-multinomial probability model to describe the organ blood flow protocol. The physical and biological information available to an investigator before an experiment is summarized in a prior probability density, which we represent as a product of Dirichlet and log-normal probability densities. We derive the marginal probability density for the flow of blood to an organ conditional on the number of microspheres observed in the counting phase of the experiment. We apply a Markov chain Monte Carlo algorithm to compute the density. The Bayesian approach is used to estimate kidney, heart, and total organ blood flow in a colored microsphere study performed in a male New Zealand white rabbit. The results from a Bayesian analysis are compared to those obtained from applying an approximate maximum likelihood procedure based on the delta method. The relation to current methods for organ blood flow estimation is described, and directions for future research are detailed.
Emery N. Brown, Adam Sapirstein
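As a reader's aid only: the Dirichlet prior combined with multinomial microsphere counts admits a simple conjugate reduction, sketched below in Python. This toy is not the authors' Poisson-multinomial model or their Markov chain Monte Carlo algorithm; the organ labels, sphere counts, and hyperparameters are invented for illustration.

```python
# Toy conjugate reduction (not the authors' full Poisson-multinomial model):
# with a Dirichlet prior on organ flow fractions and multinomial microsphere
# counts, the posterior is Dirichlet(alpha + counts) and can be simulated directly.
import numpy as np

rng = np.random.default_rng(0)

counts = np.array([120, 45, 335])     # hypothetical spheres recovered in kidney, heart, reference
alpha = np.array([2.0, 2.0, 2.0])     # assumed Dirichlet prior hyperparameters

post = rng.dirichlet(alpha + counts, size=10000)   # posterior draws of flow fractions

mean = post.mean(axis=0)
lo, hi = np.percentile(post, [2.5, 97.5], axis=0)
for name, m, l, h in zip(["kidney", "heart", "reference"], mean, lo, hi):
    print(f"{name}: posterior mean {m:.3f}, 95% interval ({l:.3f}, {h:.3f})")
```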

Elicitation, Monitoring, and Analysis for an AIDS Clinical Trial

Abstract
Bayesian methods have been a subject of increasing interest among researchers engaged in the interim monitoring and final analysis of clinical trials data. The Bayesian approach is attractive from the standpoint of allowing formal inclusion of relevant external information when deciding whether to stop or continue the trial, but if a single prior distribution is used, the approach may be somewhat limited in its ability to persuade a broad audience.
In this paper we discuss the practical application of Bayesian methodology to the analysis of clinical trial data, with emphasis on issues particularly relevant for AIDS trials. We also report on the performance of this methodology when applied to a trial conducted by the Community Programs for Clinical Research on AIDS (CPCRA), a large multicenter study network sponsored by the National Institutes of Health (NIH). The results indicate which phases of the analysis (e.g. posterior computation and summarization) are well handled using currently available techniques, and which phases (e.g. prior elicitation, interim monitoring, and endpoint determination) require further methodological development. We discuss potential solutions to some of these problem areas, and describe the nature of the research questions for some of those that remain.
Bradley P. Carlin, Kathryn M. Chaloner, Thomas A. Louis, Frank S. Rhame
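To make the kind of interim computation the abstract refers to concrete, here is a minimal hedged sketch: the posterior probability of treatment benefit under a normal approximation to the log hazard ratio with a skeptical normal prior. It is not the CPCRA analysis; every number (prior, interim estimate, standard error) is an assumption for illustration.

```python
# Hedged illustration of one interim-monitoring computation of the kind discussed:
# posterior probability that treatment reduces hazard, using a normal approximation
# to the log hazard ratio and a skeptical normal prior. All numbers are invented.
import numpy as np
from scipy.stats import norm

prior_mean, prior_sd = 0.0, 0.5          # skeptical prior on the log hazard ratio (assumed)
lhr_hat, se = -0.30, 0.20                # hypothetical interim estimate and standard error

# Normal prior x normal likelihood -> normal posterior (precision-weighted combination).
post_prec = 1.0 / prior_sd**2 + 1.0 / se**2
post_mean = (prior_mean / prior_sd**2 + lhr_hat / se**2) / post_prec
post_sd = np.sqrt(1.0 / post_prec)

p_benefit = norm.cdf(0.0, loc=post_mean, scale=post_sd)   # P(log HR < 0 | data)
print(f"posterior P(treatment benefit) = {p_benefit:.3f}")
```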

Accurate Restoration of DNA Sequences

Without Abstract
Gary A. Churchill

Analysis and Reconstruction of Medical Images Using Prior Information

Summary
We propose a Bayesian model for medical image analysis that permits prior structural information to be incorporated into the estimation of image features. Although the proposed methodology can be applied to a broad spectrum of images, we restrict attention here to emission computed tomography (ECT) images, and in particular single photon emission computed tomography (SPECT) images.
Inclusion of prior information regarding likely shapes of objects in the source distribution is accomplished using a hierarchical Bayesian model. A distinguishing feature of this model is that at the lowest level of the hierarchy, a distribution is specified over the class of all partitions of the discretized image scene. The markers used to identify these regions, called region identifiers, are assigned to each voxel (i.e. a volume element, similar to a pixel in 2D images), and are the mechanism by which information regarding the locations of likely image structures is incorporated into the prior model. Importantly, such information can be incorporated in a non-deterministic fashion, thus permitting prior structural information to be modified by image data with minimal introduction of residual artifacts. Furthermore, the statistical model accommodates the formation of object structures not anticipated a priori, based solely on the observed likelihood function.
Valen Johnson, James Bowsher, Ronald Jaszczak, Timothy Turkington
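The region-identifier idea can be illustrated with a generic spatial label prior. The sketch below uses a Potts-style prior as a stand-in, showing a single Gibbs-style update of one voxel's region label from its prior full conditional; it is not the authors' hierarchical model, and the grid size, number of regions, and smoothing strength are assumptions.

```python
# Illustrative stand-in (not the authors' hierarchical model): a Potts-style prior
# over voxel region labels, the kind of device that rewards spatially coherent
# regions while still letting the likelihood create unanticipated structures.
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=(8, 8))   # hypothetical 8x8 slice with 3 candidate regions
beta = 0.8                                  # assumed smoothing strength

def neighbor_agreement(lab, i, j, value):
    """Count 4-neighbours of voxel (i, j) sharing a given label."""
    n = 0
    for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        ii, jj = i + di, j + dj
        if 0 <= ii < lab.shape[0] and 0 <= jj < lab.shape[1]:
            n += int(lab[ii, jj] == value)
    return n

# One Gibbs-style update of a single voxel's region identifier from its prior
# full conditional (likelihood terms omitted in this sketch).
i, j = 4, 4
log_p = np.array([beta * neighbor_agreement(labels, i, j, k) for k in range(3)])
p = np.exp(log_p - log_p.max()); p /= p.sum()
labels[i, j] = rng.choice(3, p=p)
print("updated label at (4,4):", labels[i, j])
```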

Contributed Papers

Frontmatter

Combining Information from Multiple Sources in the Analysis of a Non-Equivalent Control Group Design

Abstract
In studies of whether hospital or health-center interventions can improve screening rates for mammography and Pap smears in Los Angeles County, the availability of data from multiple sources makes it possible to combine information in an effort to improve the estimation of intervention effects. Primary sources of information, namely computerized databases that record screening outcomes and some covariates on a routine basis, are supplemented by medical chart reviews that provide additional, sometimes conflicting, assessments of screening outcomes along with additional covariates. Available data can be classified in a large contingency table where, because medical charts were not reviewed for all individuals, some cases can only be classified into a certain margin as opposed to a specific cell. This paper outlines a multiple imputation approach to facilitate data analysis using the framework of Schafer (1991, 1995), which involves drawing imputations from a multinomial distribution with cell probabilities estimated from a loglinear model fitted to the incomplete contingency table. Because of the sparseness of the contingency table, a cavalier choice of a convenient prior distribution can be problematic. The completed data are then analyzed using the method of propensity score subclassification (Rosenbaum and Rubin 1984) to reflect differences in the patient populations at different hospitals or health centers.
Thomas R. Belin, Robert M. Elashoff, Kwan-Moon Leung, Rosane Nisenbaum, Roshan Bastani, Kiumarss Nasseri, Annette Maxwell
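One imputation step of the general kind described above can be sketched as follows: cases classified only to a margin of the table are allocated to specific cells by a multinomial draw under the current cell probabilities. The table, probabilities, and counts below are invented; the sketch is not the authors' loglinear imputation model.

```python
# Minimal sketch of a single imputation step in this spirit: cases classified only
# to a margin of the contingency table are allocated to specific cells by drawing
# from the multinomial implied by current cell probabilities. Values are invented.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical current draw of cell probabilities for (screened yes/no) x (chart agrees yes/no).
cell_prob = np.array([[0.35, 0.10],
                      [0.15, 0.40]])

# 60 hypothetical cases known only to lie in the first row (screened = yes).
row_prob = cell_prob[0] / cell_prob[0].sum()
imputed_row0 = rng.multinomial(60, row_prob)
print("cases allocated within row 0:", imputed_row0)

# Repeating such draws m times gives m completed tables, each analysed in full
# and then combined with Rubin's multiple-imputation rules.
```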

Road Closure: Combining Data and Expert Opinion

Abstract
Decisions to close the Little Cottonwood Canyon Highway to vehicular traffic are made by avalanche forecasters. These decisions are based on professional experience and on careful monitoring of the prevailing conditions. Considerable data on weather and snowpack conditions exist. These data are informally employed by the forecasters in the road closure decision, but formal statistical methods are not presently used. This paper attempts a more formal statistical analysis to determine whether this might facilitate the road closure decision. The conclusion is that the statistical model provides information relevant to the road closure decision that is not identical to that of the experts. When the expert decision is augmented by the statistical information, better decisions are reached than with either the expert opinion alone or the statistical model alone.
Gail Blattenberger, Richard Fowles
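A very rough sketch of how a model probability might augment the forecaster's call is given below: a simple expected-loss comparison between closing and keeping the road open. The probabilities and loss values are invented, and the decision rule is not taken from the paper.

```python
# Hedged sketch of augmenting an expert call with a statistical model's output via
# a simple expected-loss comparison. All probabilities and losses are illustrative.
p_model = 0.12            # hypothetical model probability of an avalanche reaching the road
expert_closes = False     # hypothetical forecaster decision from prevailing conditions

loss_miss = 100.0         # assumed cost of keeping the road open when an avalanche occurs
loss_false_alarm = 5.0    # assumed cost of an unnecessary closure

expected_loss_open = p_model * loss_miss
expected_loss_close = (1.0 - p_model) * loss_false_alarm

model_closes = expected_loss_close < expected_loss_open
print("model recommends closure:", model_closes)
print("agrees with forecaster:", model_closes == expert_closes)
```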

Optimal Design for Heart Defibrillators

Abstract
During heart defibrillator implantation, a physician fibrillates the patient’s heart several times at different test strengths to estimate the effective strength necessary for defibrillation. One strategy is to implant at the strength that defibrillates 95% of the time (ED95). Efficient choice and use of the test strengths in this estimation problem is crucial, as each additional observation increases the risk of serious injury or death. Such choice can be formalized as finding an optimal design in, say, a logistic regression problem with interest in estimating the ED95. In practice, important features specific to this problem are the very small sample sizes; the asymmetry of the loss function; and the fact that the prior distribution arises as the distribution for the next draw of patient-specific parameters in a hierarchical model.
Merlise Clyde, Peter Müller, Giovanni Parmigiani
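A hedged sketch of a preposterior design calculation in this spirit follows: candidate test strengths are scored by the expected posterior variance of the ED95 under a logistic dose-response model with an assumed prior on patient-specific parameters. It uses a symmetric variance criterion rather than the paper's asymmetric loss, and all numerical values are assumptions.

```python
# Minimal sketch (assumptions throughout, not the authors' design algorithm):
# evaluate candidate test strengths for a one-observation logistic design by
# Monte Carlo, scoring each design by the posterior variance of the ED95
# averaged over a prior on the patient-specific logistic parameters.
import numpy as np
from scipy.special import expit, logit

rng = np.random.default_rng(3)

def ed95(a, b):
    """Strength giving 95% defibrillation probability when logit P(success) = a + b*x."""
    return (logit(0.95) - a) / b

# Assumed prior on (intercept, slope), e.g. the predictive for the next patient.
prior = lambda n: np.column_stack([rng.normal(-6.0, 1.0, n), rng.normal(0.5, 0.1, n)])

def score(x, n_sim=500, n_post=2000):
    """Average posterior variance of the ED95 after a single test at strength x."""
    var_sum = 0.0
    for a, b in prior(n_sim):
        y = rng.uniform() < expit(a + b * x)             # simulated outcome at x
        pa, pb = prior(n_post).T                          # prior draws reweighted to the posterior
        w = expit(pa + pb * x) if y else 1.0 - expit(pa + pb * x)
        w /= w.sum()                                      # importance weights given the outcome
        e = ed95(pa, pb)
        m = np.sum(w * e)
        var_sum += np.sum(w * (e - m) ** 2)
    return var_sum / n_sim

for x in [15.0, 20.0, 25.0]:                              # candidate test strengths (assumed units)
    print(f"strength {x:4.1f}: expected posterior variance of ED95 = {score(x):.2f}")
```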

Longitudinal Care Patterns for Disabled Elders: A Bayesian Analysis of Missing Data

Abstract
The growth in the number of dependent elderly and the concurrent decline in the availability of informal caregivers have led to increasing concern that use of publicly funded long-term care services will rise, as community-residing elders substitute these formal services for informal care. To investigate the extent of service substitution and identify correlates, longitudinal data from a study of elders and their informal caregivers were analyzed. Over 50% of subjects were missing the outcome of interest, although only 8% had no partial outcome data. This paper compares results from three missing-data methods: complete-case analysis, single regression imputation, and multiple regression imputation. Prior assumptions regarding the distribution of the outcome among nonrespondents were varied in order to examine their effect on the estimates. Regardless of the prior assumptions, 22%–23% of elders were estimated to substitute formal services for informal care. Substitution was associated with greater demands on and lower availability of informal caregivers, occurring primarily in situations where the need for care exceeded the capacity of the informal care network. Imputation provided a straightforward means of estimating the probability of substitution among a subgroup of nonrespondents not represented among respondents. In addition, multiple imputation incorporated uncertainty about nonrespondent parameters.
Sybil L. Crawford, Sharon L. Tennstedt, John B. McKinlay
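A small illustration of two of the compared approaches follows: a complete-case estimate versus multiple imputation of a binary substitution outcome under an assumed prior shift for nonrespondents. The data are simulated and the shift is an arbitrary assumption, not a value from the study.

```python
# Small sketch contrasting a complete-case estimate with multiple imputation of a
# binary substitution outcome, where nonrespondents' success probability is shifted
# by an assumed offset. Data are simulated; the offset is an assumption.
import numpy as np

rng = np.random.default_rng(4)

n = 400
y = rng.binomial(1, 0.22, n).astype(float)      # simulated substitution indicator
observed = rng.uniform(size=n) < 0.5            # roughly half of outcomes observed
y_obs = np.where(observed, y, np.nan)

complete_case = np.nanmean(y_obs)

# Multiple imputation: assume nonrespondents substitute somewhat more often than
# respondents (prior assumption), and propagate uncertainty across m imputations.
m, offset = 20, 0.05
estimates = []
for _ in range(m):
    p_resp = rng.beta(1 + np.nansum(y_obs), 1 + observed.sum() - np.nansum(y_obs))
    p_nonresp = min(p_resp + offset, 1.0)
    y_imp = y_obs.copy()
    y_imp[~observed] = rng.binomial(1, p_nonresp, (~observed).sum())
    estimates.append(y_imp.mean())

print(f"complete-case estimate: {complete_case:.3f}")
print(f"multiple-imputation estimate: {np.mean(estimates):.3f} (SD {np.std(estimates):.3f})")
```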

Bayesian Inference for the Mean of a Stratified Population When There Are Order Restrictions

Abstract
Consider a stratified random sample of firms with the strata defined by a firm’s number of employees. We wish to make inference about the mean sales and receipts (SR) for all such firms, and the proportion of firms belonging to each of several classes defined by a firm’s SR. Let \(P_{i\cdot}\) denote the known proportion of firms belonging to stratum \(i\), \(\pi_{ij}\) the proportion of firms in stratum \(i\) belonging to SR class \(j\), and \(Y_j\) the mean SR in class \(j\). We use Bayesian methods to make inference about \(P_{\cdot j} = \sum_i P_{i\cdot}\,\pi_{ij}\) and \(\mu = \sum_j Y_j\,P_{\cdot j}\). In particular, specifications of smoothness, expressed as unimodal order relations among the \(\pi_{ij}\) (within and between the rows of the two-way table), are incorporated into the prior distributions. With computations facilitated by using the Gibbs sampler, we show that the smoothness conditions provide substantial gains in precision when the \(P_{\cdot j}\) and \(\mu\) are estimated.
B. Nandram, J. Sedransk
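The estimands \(P_{\cdot j}\) and \(\mu\) can be illustrated with a toy computation. The sketch below draws the \(\pi_{ij}\) from independent Dirichlet distributions and keeps only draws with unimodal rows as a crude stand-in for the order-restricted prior; it is not the authors' Gibbs sampler, and all inputs are invented.

```python
# Toy sketch of the estimands in the abstract (not the authors' Gibbs sampler):
# given known stratum proportions P_i. and draws of the within-stratum class
# proportions pi_ij, compute P_.j and mu, keeping only draws whose rows are
# unimodal as a crude stand-in for the order-restricted prior. Inputs are invented.
import numpy as np

rng = np.random.default_rng(5)

P_i = np.array([0.5, 0.3, 0.2])              # known stratum proportions (assumed)
Y_j = np.array([10.0, 50.0, 200.0])          # mean sales/receipts in each SR class (assumed)
alpha = np.array([[2, 5, 3],                 # Dirichlet parameters for each stratum's pi_i.
                  [1, 4, 5],
                  [1, 2, 7]], dtype=float)

def is_unimodal(row):
    d = np.sign(np.diff(row))
    return not np.any((d[:-1] < 0) & (d[1:] > 0))   # never decreases and then increases

draws_mu, draws_Pj = [], []
while len(draws_mu) < 2000:
    pi = np.array([rng.dirichlet(a) for a in alpha])   # one draw of the 3x3 table
    if all(is_unimodal(row) for row in pi):            # crude order restriction by rejection
        P_j = P_i @ pi                                  # P_.j = sum_i P_i. * pi_ij
        draws_Pj.append(P_j)
        draws_mu.append(P_j @ Y_j)                      # mu = sum_j Y_j * P_.j

print("posterior mean of P_.j:", np.mean(draws_Pj, axis=0).round(3))
print("posterior mean of mu:  ", round(float(np.mean(draws_mu)), 2))
```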

Hierarchical Modelling of Consumer Heterogeneity: An Application to Target Marketing

Abstract
An important aspect of marketing practice is the targeting of consumer segments for differential promotional activity. The premise of this activity is that there exist distinct segments of homogeneous consumers who can be identified by readily available demographic information. The increased availability of individual consumer panel data opens the possibility of direct targeting of individual households. Direct marketing activities hinge on the identification of distinct patterns of household behavior (such as loyalty, price sensitivity, or response to feature advertisements) from the consumer panel data for a given household. The goal of this paper is to assess the information content of various standard segmentation schemes in which variation in household behavior is linked to demographic variables versus the information content of individual data. To measure information content, we pose a couponing problem in which a blanket drop is compared to drops targeted on the basis of consumer demographics alone, and finally to a targeted drop which is based on household panel data. We exploit new econometric methods to implement a random coefficient choice model in which the heterogeneity distribution is related to observable demographics.
Peter E. Rossi, Robert E. McCulloch, Greg M. Allenby
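A rough sketch of the kind of hierarchy described above: each household's price sensitivity is drawn around a mean that depends on an observable demographic, and a targeted coupon drop uses household-level coefficients rather than a blanket rule. Everything below is simulated for illustration and is not the authors' econometric method.

```python
# Rough sketch (not the authors' method): each household's price sensitivity beta_h
# is drawn around a demographic-dependent mean, and purchase histories at varying
# prices are what a hierarchical posterior would use to recover beta_h.
import numpy as np

rng = np.random.default_rng(6)

H, T = 200, 30                                   # households, purchase occasions
income = rng.normal(size=H)                      # hypothetical demographic
beta_h = -1.0 + 0.4 * income + 0.5 * rng.standard_normal(H)   # heterogeneity distribution

price = rng.uniform(1.0, 3.0, size=(H, T))
buy = rng.uniform(size=(H, T)) < 1.0 / (1.0 + np.exp(-beta_h[:, None] * price))

# A blanket coupon drop treats every household alike; a targeted drop uses the
# household-level sensitivities (here the simulated beta_h, which a hierarchical
# posterior would estimate from the panel) to pick the price-sensitive homes.
targeted = beta_h < np.median(beta_h)
print("households targeted:", int(targeted.sum()), "of", H)
print("mean price sensitivity, targeted vs rest:",
      round(float(beta_h[targeted].mean()), 2), "vs", round(float(beta_h[~targeted].mean()), 2))
print("observed purchase rate, targeted vs rest:",
      round(float(buy[targeted].mean()), 3), "vs", round(float(buy[~targeted].mean()), 3))
```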

Backmatter

Further Information