2000 | Book

Monte Carlo Methods in Bayesian Computation

Authors: Ming-Hui Chen, Qi-Man Shao, Joseph G. Ibrahim

Publisher: Springer New York

Book series: Springer Series in Statistics


About this book

Sampling from the posterior distribution and computing posterior quantities of interest using Markov chain Monte Carlo (MCMC) samples are two major challenges involved in advanced Bayesian computation. This book examines each of these issues in detail and focuses heavily on computing various posterior quantities of interest from a given MCMC sample. Several topics are addressed, including techniques for MCMC sampling, Monte Carlo (MC) methods for estimation of posterior summaries, improving simulation accuracy, marginal posterior density estimation, estimation of normalizing constants, constrained parameter problems, Highest Posterior Density (HPD) interval calculations, computation of posterior modes, and posterior computations for proportional hazards models and Dirichlet process models. Extensive discussion is also given for computations involving model comparisons, including both nested and nonnested models. Marginal likelihood methods, ratios of normalizing constants, Bayes factors, the Savage-Dickey density ratio, Stochastic Search Variable Selection (SSVS), Bayesian Model Averaging (BMA), the reversible jump algorithm, and model adequacy using predictive and latent residual approaches are also discussed. The book presents an equal mixture of theory and real applications.

Table of contents

Frontmatter
1. Introduction
Abstract
There are two major challenges involved in advanced Bayesian computation. These are how to sample from posterior distributions and how to compute posterior quantities of interest using Markov chain Monte Carlo (MCMC) samples. Several books, including Tanner (1996), Gilks, Richardson, and Spiegelhalter (1996), Gamerman (1997), Robert and Casella (1999), and Gelfand and Smith (2000), cover the development of MCMC sampling. Therefore, this book will provide only a quick but sufficient introduction to recently developed MCMC sampling techniques. In particular, the book will discuss several recently developed and useful computational tools in MCMC sampling which may not be presented in other existing MCMC books, including those mentioned above.
Ming-Hui Chen, Qi-Man Shao, Joseph G. Ibrahim
2. Markov Chain Monte Carlo Sampling
Abstract
Recently, Monte Carlo (MC) based sampling methods for evaluating high-dimensional posterior integrals have been rapidly developing. Those sampling methods include MC importance sampling (Hammersley and Handscomb 1964; Ripley 1987; Geweke 1989; Wolpert 1991), Gibbs sampling (Geman and Geman 1984; Gelfand and Smith 1990), Hit-and-Run sampling (Smith 1984; Bélisle, Romeijn, and Smith 1993; Chen 1993; Chen and Schmeiser 1993 and 1996), Metropolis-Hastings sampling (Metropolis et al. 1953; Hastings 1970; Green 1995), and hybrid methods (e.g., Müller 1991; Tierney 1994; Berger and Chen 1993). A general discussion of the Gibbs sampler and other Markov chain Monte Carlo (MCMC) methods is given in the Journal of the Royal Statistical Society, Series B (1993), and an excellent roundtable discussion on the practical use of MCMC can be found in Kass et al. (1998). Other discussions or instances of the use of MCMC sampling can be found in Tanner and Wong (1987), Tanner (1996), Geyer (1992), Gelman and Rubin (1992), Gelfand, Smith, and Lee (1992), Gilks and Wild (1992), and many others. Further developments of state-of-the-art MCMC sampling techniques include the accelerated MCMC sampling of Liu and Sabatti (1998, 1999), Liu (1998), and Liu and Wu (1997), and the exact MCMC sampling of Green and Murdoch (1999). Comprehensive accounts of MCMC methods and their applications may also be found in Meyn and Tweedie (1993), Tanner (1996), Gilks, Richardson, and Spiegelhalter (1996), Robert and Casella (1999), and Gelfand and Smith (2000). The purpose of this chapter is to give a brief overview of several commonly used MCMC sampling algorithms as well as to present selectively several newly developed computational tools for MCMC sampling.
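As a small illustrative companion to this chapter (our sketch, not code from the book), a random-walk Metropolis-Hastings sampler fits in a few lines of Python; the target here is a standard normal density known only up to its normalizing constant:

```python
import math
import random

def metropolis_hastings(log_target, n_iter, x0=0.0, prop_sd=1.0, seed=42):
    """Random-walk Metropolis-Hastings: generates a chain whose stationary
    distribution is proportional to exp(log_target)."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, prop_sd)   # symmetric proposal
        # Accept with probability min(1, target(y) / target(x)).
        if rng.random() < math.exp(min(0.0, log_target(y) - log_target(x))):
            x = y
        chain.append(x)
    return chain

# Target known up to a constant: log q(t) = -t^2/2 (a standard normal).
chain = metropolis_hastings(lambda t: -0.5 * t * t, n_iter=20000)
draws = chain[2000:]                      # discard burn-in
mean = sum(draws) / len(draws)
```

The proposal standard deviation `prop_sd` is a hypothetical tuning choice; in practice it would be adjusted to balance acceptance rate and mixing.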
Ming-Hui Chen, Qi-Man Shao, Joseph G. Ibrahim
3. Basic Monte Carlo Methods for Estimating Posterior Quantities
Abstract
The fundamental goal of Bayesian computation is to compute posterior quantities of interest. When the posterior distribution π(θ|D) is high dimensional, one is typically driven to do multidimensional integration to evaluate marginal posterior summaries of the parameters. When a Bayesian model is complicated, analytical or exact numerical evaluation may fail to solve this computational problem. In this regard, the Monte Carlo (MC) method, in particular, the Markov chain Monte Carlo (MCMC) approach, may naturally serve as an alternative solution. Many of the MCMC sampling algorithms discussed in Chapter 2 can be applied here to generate an MCMC sample from the posterior distribution. The main objective of this chapter is to provide a comprehensive treatment of how to use an MCMC sample to obtain MC estimates of posterior quantities. In particular, we will present an overview of several basic MC methods for computing posterior quantities as well as assessing simulation accuracy of MC estimates. In addition, several related issues, including how to obtain more efficient MC estimates, such as the weighted MC estimates, and how to control simulation errors when computing MC estimates, will be addressed.
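A minimal sketch of the basic MC estimate of a posterior mean, together with a batch-means simulation standard error that allows for autocorrelation in the chain (our illustration in Python; the toy draws below stand in for real MCMC output):

```python
import math
import random

def mc_estimate(draws, n_batches=20):
    """Posterior-mean estimate from an MCMC sample, with a batch-means
    standard error: split the chain into batches, estimate the variance
    of the batch means, and scale accordingly."""
    b = len(draws) // n_batches                 # batch size
    used = draws[: b * n_batches]               # drop any remainder
    est = sum(used) / len(used)
    batch_means = [sum(used[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    var_bm = sum((m - est) ** 2 for m in batch_means) / (n_batches - 1)
    return est, math.sqrt(var_bm / n_batches)

# Toy "chain" with known mean 2.0 standing in for MCMC output.
rng = random.Random(0)
draws = [rng.gauss(2.0, 1.0) for _ in range(10000)]
est, se = mc_estimate(draws)
```

The number of batches is a hypothetical choice; for a strongly autocorrelated chain, fewer and longer batches give a more reliable standard error.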
Ming-Hui Chen, Qi-Man Shao, Joseph G. Ibrahim
4. Estimating Marginal Posterior Densities
Abstract
In Bayesian inference, a joint posterior distribution is available through the likelihood function and a prior distribution. One purpose of Bayesian inference is to calculate and display marginal posterior densities because the marginal posterior densities provide complete information about parameters of interest. As shown in Chapter 2, a Markov chain Monte Carlo (MCMC) sampling algorithm, such as the Gibbs sampler or a Metropolis-Hastings algorithm, can be used to draw MCMC samples from the posterior distribution. Chapter 3 also demonstrates how we can easily obtain posterior quantities such as posterior means, posterior standard deviations, and other posterior quantities from MCMC samples. However, when a Bayesian model becomes complicated, it may be difficult to obtain a reliable estimator of a marginal posterior density based on the MCMC sample. A traditional method for estimating marginal posterior densities is kernel density estimation. Since the kernel density estimator is nonparametric, it may not be efficient. Moreover, the kernel density estimator may not be applicable for some complicated Bayesian models. In the context of Bayesian inference, the joint posterior density is typically known up to a normalizing constant. Using the structure of a posterior density, a number of authors (e.g., Gelfand, Smith, and Lee 1992; Johnson 1992; Chen 1993 and 1994; Chen and Shao 1997c; Chib 1995; Verdinelli and Wasserman 1995) propose parametric marginal posterior density estimators based on the MCMC sample. In this chapter, we present several available Monte Carlo (MC) methods for computing marginal posterior density estimators, and we also discuss how well marginal posterior density estimation works using the Kullback-Leibler (K-L) divergence as a performance measure.
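For reference, a small illustration of the traditional kernel approach mentioned above (a Gaussian kernel with Silverman's rule-of-thumb bandwidth; the code and test values are ours, not the book's):

```python
import math
import random

def kde(draws, x, bandwidth=None):
    """Gaussian kernel density estimate of a marginal posterior at point x."""
    n = len(draws)
    if bandwidth is None:
        # Silverman's rule of thumb: h = 1.06 * sd * n^(-1/5).
        mean = sum(draws) / n
        sd = math.sqrt(sum((d - mean) ** 2 for d in draws) / (n - 1))
        bandwidth = 1.06 * sd * n ** (-0.2)
    total = sum(math.exp(-0.5 * ((x - d) / bandwidth) ** 2) for d in draws)
    return total / (n * bandwidth * math.sqrt(2.0 * math.pi))

# Toy sample from a standard normal; the true density at 0 is about 0.3989.
rng = random.Random(1)
draws = [rng.gauss(0.0, 1.0) for _ in range(5000)]
density_at_0 = kde(draws, 0.0)
```

Note the slight downward bias at the mode: kernel smoothing flattens peaks, which is one aspect of the inefficiency the abstract alludes to.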
Ming-Hui Chen, Qi-Man Shao, Joseph G. Ibrahim
5. Estimating Ratios of Normalizing Constants
Abstract
A computational problem arising frequently in Bayesian inference is the computation of normalizing constants for posterior densities from which we can sample. Typically, we are interested in the ratios of such normalizing constants. For example, a Bayes factor is defined as the ratio of posterior odds versus prior odds, where posterior odds is simply a ratio of the normalizing constants of two posterior densities. Mathematically, this problem can be formulated as follows. Let π_l(θ), l = 1, 2, be two densities, each of which is known up to a normalizing constant: π_l(θ) = q_l(θ)/c_l, where q_l(θ) can be evaluated but c_l is unknown, and the goal is to estimate the ratio r = c_1/c_2.
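The simplest estimator of such a ratio rests on the importance-sampling identity E_{π_1}[q_2(θ)/q_1(θ)] = c_2/c_1, valid when the support of π_2 lies within that of π_1. A toy Python sketch (our illustration, using two Gaussian kernels whose ratio is known exactly):

```python
import math
import random

# Two unnormalized densities: q1(t) = exp(-t^2/2) and q2(t) = exp(-t^2),
# with normalizing constants c1 = sqrt(2*pi) and c2 = sqrt(pi), so the
# true ratio is c2/c1 = 1/sqrt(2) ≈ 0.7071.
rng = random.Random(7)
n = 100000

# Draw from pi_1 (a standard normal) and average the ratio q2/q1 = exp(-t^2/2):
# by the identity above, this estimates c2/c1.
draws = [rng.gauss(0.0, 1.0) for _ in range(n)]
ratio_hat = sum(math.exp(-0.5 * t * t) for t in draws) / n
```

Here the ratio q2/q1 is bounded, so the estimator has finite variance; sampling from the narrower density instead would make the ratio unbounded and the estimator unstable, which is why more refined methods (bridge sampling, path sampling) are needed in harder problems.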
Ming-Hui Chen, Qi-Man Shao, Joseph G. Ibrahim
6. Monte Carlo Methods for Constrained Parameter Problems
Abstract
Constraints on the parameters in Bayesian hierarchical models typically make Bayesian computation and analysis complicated. Posterior densities that contain analytically intractable integrals as normalizing constants depending on the hyperparameters often make Gibbs sampling or Metropolis-Hastings algorithms difficult to implement. In this chapter, we use simulation-based methods via the “reweighting mixtures” of Geyer (1994) to compute posterior quantities of the desired Bayesian posterior distribution.
Ming-Hui Chen, Qi-Man Shao, Joseph G. Ibrahim
7. Computing Bayesian Credible and HPD Intervals
Abstract
One purpose of Bayesian posterior inference is to summarize posterior marginal densities. Graphical presentation of the entire posterior distribution is always desirable if this can be conveniently accomplished. However, summary statistics, which outline important features of the posterior distribution, are sometimes adequate. In particular, when one deals with a high-dimensional problem, graphical display of the entire posterior distribution is generally infeasible, and therefore appropriate summaries of the posterior distribution are most desirable.
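For a unimodal marginal posterior, one convenient summary of this kind, the HPD interval, can be approximated directly from the sorted MCMC draws by taking the shortest interval containing a (1-α) fraction of them. The following Python sketch is our illustration of that order-statistics idea, not code from the book:

```python
import math
import random

def hpd_interval(draws, alpha=0.05):
    """Approximate 100(1-alpha)% HPD interval from an MCMC sample: among
    all intervals covering a (1-alpha) fraction of the sorted draws,
    return the shortest one (appropriate for unimodal marginals)."""
    s = sorted(draws)
    n = len(s)
    m = int(math.floor((1.0 - alpha) * n))
    # Candidate intervals are (s[i], s[i+m]); pick the narrowest.
    width, i = min((s[i + m] - s[i], i) for i in range(n - m))
    return s[i], s[i + m]

# Toy sample from a standard normal: the 95% HPD interval is
# approximately (-1.96, 1.96) by symmetry.
rng = random.Random(3)
draws = [rng.gauss(0.0, 1.0) for _ in range(20000)]
lo, hi = hpd_interval(draws)
```

For a symmetric unimodal posterior the HPD interval coincides with the equal-tail credible interval; for skewed posteriors it is strictly shorter, which is what makes it worth computing.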
Ming-Hui Chen, Qi-Man Shao, Joseph G. Ibrahim
8. Bayesian Approaches for Comparing Nonnested Models
Abstract
Model comparison and model assessment are a crucial part of statistical analysis. Due to recent computational advances, sophisticated techniques for Bayesian model assessment are becoming increasingly popular. We have seen a recent surge in the statistical literature on Bayesian methods for model assessment and model comparison, including articles by George and McCulloch (1993), Madigan and Raftery (1994), Ibrahim and Laud (1994), Laud and Ibrahim (1995), Kass and Raftery (1995), Chib (1995), Chib and Greenberg (1998), Raftery, Madigan, and Volinsky (1995), George, McCulloch, and Tsay (1996), Raftery, Madigan, and Hoeting (1997), Gelfand and Ghosh (1998), Clyde (1999), Sinha, Chen, and Ghosh (1999), Ibrahim, Chen, and Sinha (1998), Ibrahim and Chen (1998), Ibrahim, Chen, and MacEachern (1999), and Chen, Ibrahim, and Yiannoutsos (1999). The scope of Bayesian model comparison and model assessment is quite broad, and can be investigated via Bayes factors, model diagnostics, and goodness of fit measures. In many situations, one may want to compare several models which are not nested. In this particular context, we consider model comparison using marginal likelihood approaches, “super-model” or “sub-model” approaches, and criterion-based methods. The computational techniques involved in these three approaches will be addressed in detail. In addition, several important applications, including scale mixtures of multivariate normal link models and Bayesian discretized semiparametric models for interval-censored data, will be presented.
Ming-Hui Chen, Qi-Man Shao, Joseph G. Ibrahim
9. Bayesian Variable Selection
Abstract
Variable selection is one of the most frequently encountered problems in statistical data analysis. In cancer or AIDS clinical trials, for example, one often wishes to assess the importance of certain prognostic factors such as treatment, age, gender, or race in predicting survival outcome. Most of the existing literature addresses variable selection using criterion-based methods such as the Akaike Information Criterion (AIC) (Akaike 1973) or Bayesian Information Criterion (BIC) (Schwarz 1978). Fully Bayesian approaches to variable selection for these models are now feasible due to recent advances in computing technology and the development of efficient computational algorithms. The Bayesian approach to variable selection is straightforward in principle. One quantifies the prior uncertainties via probabilities for each model under consideration, specifies a prior distribution for each of the parameters in each model, and then uses Bayes’ theorem to calculate posterior model probabilities.
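The three steps described above can be illustrated with a toy posterior-model-probability calculation (the log marginal likelihoods below are hypothetical numbers chosen for illustration, not values from the book):

```python
import math

# Hypothetical log marginal likelihoods log p(D | M_k) for three candidate
# models, with an equal prior probability p(M_k) = 1/3 on each.
log_marg_lik = {"M1": -120.4, "M2": -118.9, "M3": -125.0}

# Bayes' theorem: p(M_k | D) ∝ p(D | M_k) * p(M_k).  With equal priors the
# prior cancels; subtract the max log value before exponentiating for
# numerical stability.
m = max(log_marg_lik.values())
unnorm = {k: math.exp(v - m) for k, v in log_marg_lik.items()}
total = sum(unnorm.values())
post_prob = {k: v / total for k, v in unnorm.items()}
```

With unequal prior model probabilities, each term would simply be multiplied by p(M_k) before normalizing; the hard computational work, treated in this chapter, is estimating the marginal likelihoods themselves.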
Ming-Hui Chen, Qi-Man Shao, Joseph G. Ibrahim
10. Other Topics
Abstract
In this last chapter of the book, we discuss several other related Monte Carlo methods commonly used in Bayesian computation. More specifically, we present various Bayesian methods for model adequacy and related computational techniques, including Monte Carlo estimation of Conditional Predictive Ordinates (CPO) and various Bayesian residuals. This chapter also provides a detailed treatment of the computation of posterior modes, and sampling from posterior distributions for proportional hazards models and mixture of Dirichlet process models.
Ming-Hui Chen, Qi-Man Shao, Joseph G. Ibrahim
Backmatter
Metadata
Title
Monte Carlo Methods in Bayesian Computation
Authors
Ming-Hui Chen
Qi-Man Shao
Joseph G. Ibrahim
Copyright year
2000
Publisher
Springer New York
Electronic ISBN
978-1-4612-1276-8
Print ISBN
978-1-4612-7074-4
DOI
https://doi.org/10.1007/978-1-4612-1276-8