2011 | OriginalPaper | Chapter
The Computational Complexity of Estimating MCMC Convergence Time
Authors : Nayantara Bhatnagar, Andrej Bogdanov, Elchanan Mossel
Published in: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques
Publisher: Springer Berlin Heidelberg
An important problem in the implementation of Markov Chain Monte Carlo algorithms is to determine the convergence time, that is, the number of iterations before the chain is close to stationarity. For many Markov chains used in practice this time is not known, and there does not seem to be a general technique for upper bounding the convergence time that gives sufficiently sharp (useful in practice) bounds in all cases of interest. Practitioners therefore often carry out some form of statistical analysis to assess convergence. This has led to the development of a number of methods, known as convergence diagnostics, which attempt to detect whether the Markov chain is far from stationarity. We study the problem of testing convergence in the following settings and prove that it is computationally hard:
- Given a Markov chain that mixes rapidly, it is hard for Statistical Zero Knowledge (SZK-hard) to distinguish whether, starting from a given state, the chain is close to stationarity by time t or far from stationarity at time ct for a constant c. We show the problem is in AM ∩ coAM.
- Given a Markov chain that mixes rapidly, it is coNP-hard to distinguish, from an arbitrary starting state, whether it is close to stationarity by time t or far from stationarity at time ct for a constant c. The problem is in coAM.
- It is PSPACE-complete to distinguish whether the Markov chain is close to stationarity by time t or still far from stationarity at time ct for c ≥ 1.
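As an illustrative aside (not part of the paper), the notion of being "close to stationarity by time t" is usually measured in total variation distance. The sketch below, a hypothetical example using a lazy random walk on a small cycle, computes the total variation distance between the t-step distribution from a fixed start state and the stationary (here uniform) distribution, showing how it decays with t:

```python
import numpy as np

def lazy_cycle_walk(n):
    """Transition matrix of the lazy random walk on an n-cycle:
    hold with probability 1/2, step left or right with probability 1/4 each."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i + 1) % n] = 0.25
        P[i, (i - 1) % n] = 0.25
    return P

def tv_from_start(P, start, t):
    """Total variation distance ||P^t(start, .) - pi||_TV.
    For this symmetric chain the stationary distribution pi is uniform."""
    n = P.shape[0]
    dist = np.zeros(n)
    dist[start] = 1.0
    dist = dist @ np.linalg.matrix_power(P, t)  # t-step distribution
    pi = np.full(n, 1.0 / n)
    return 0.5 * np.abs(dist - pi).sum()

P = lazy_cycle_walk(8)
for t in [1, 10, 50, 200]:
    print(t, tv_from_start(P, 0, t))
```

With the full transition matrix in hand the distance is computable exactly, which is why the example is easy; the hardness results above concern chains given implicitly (e.g., by a circuit), where no such direct computation is available.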