Complexity Analysis of Informed MCMC Methods for High-dimensional Model Selection Problems
Informed Markov chain Monte Carlo (MCMC) methods have been proposed as scalable solutions to Bayesian posterior computation on high-dimensional discrete state spaces, but theoretical results on their convergence behavior in general settings are lacking. In this talk, we introduce a novel and generally applicable framework for studying the complexity of local Metropolis-Hastings (MH) algorithms on discrete spaces. For random walk MH algorithms, our bounds improve on existing results in the literature; for informed MH algorithms, our method yields the optimal "dimension-free" mixing rate, which provides theoretical justification for the use of informed MCMC methods in practice. One example we will discuss is high-dimensional structure learning, a fundamental problem in causal inference and machine learning. On the algorithmic side, we propose two novel informed MCMC algorithms, one based on Metropolis-Hastings sampling and the other on importance weighting. The talk is based on joint works with J. Yang, D. Vats, G. Roberts, J. Rosenthal, H. Chang and A. Smith.
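To make the contrast with random walk MH concrete, the sketch below implements a generic informed MH step on the binary model-selection space {0,1}^p: instead of flipping a uniformly chosen coordinate, the proposal weights each single-bit flip by a balancing function of the posterior ratio (here the square root). This is a minimal illustration of the informed-proposal idea on a toy target, not the specific algorithms or bounds from the talk; the target `log_target` and all parameter choices are illustrative assumptions.

```python
import numpy as np

def log_target(gamma, signal):
    # Toy log posterior over binary inclusion vectors: penalize each
    # coordinate that disagrees with the "true" model `signal`.
    return -4.0 * np.sum(gamma != signal)

def flip_ratios(gamma, log_target_fn):
    # Log posterior ratios log pi(gamma^i) - log pi(gamma) for all
    # single-bit flips gamma^i of the current state.
    cur = log_target_fn(gamma)
    out = np.empty(len(gamma))
    for i in range(len(gamma)):
        nb = gamma.copy()
        nb[i] ^= 1
        out[i] = log_target_fn(nb) - cur
    return out

def informed_mh_step(gamma, log_target_fn, rng):
    # One informed MH step: propose a flip with probability proportional
    # to sqrt(pi(gamma^i)/pi(gamma)), i.e. log-weights = ratios / 2,
    # then correct with the usual MH acceptance ratio.
    lw = 0.5 * flip_ratios(gamma, log_target_fn)
    w = np.exp(lw - lw.max())
    probs = w / w.sum()
    i = rng.choice(len(gamma), p=probs)
    prop = gamma.copy()
    prop[i] ^= 1
    # Reverse-move proposal probability, evaluated from the proposed state.
    lw_rev = 0.5 * flip_ratios(prop, log_target_fn)
    w_rev = np.exp(lw_rev - lw_rev.max())
    rev_probs = w_rev / w_rev.sum()
    log_acc = (log_target_fn(prop) - log_target_fn(gamma)
               + np.log(rev_probs[i]) - np.log(probs[i]))
    return prop if np.log(rng.random()) < log_acc else gamma

# Run the chain on a small toy problem and record visited states.
rng = np.random.default_rng(0)
p = 10
signal = np.zeros(p, dtype=int)
signal[:3] = 1
gamma = rng.integers(0, 2, p)
visits = {}
for t in range(600):
    gamma = informed_mh_step(gamma, lambda g: log_target(g, signal), rng)
    if t >= 100:  # discard burn-in
        key = tuple(gamma)
        visits[key] = visits.get(key, 0) + 1
mode = max(visits, key=visits.get)
print(mode)
```

Because the proposal already concentrates on high-posterior neighbors, the chain locates and stays near the true model much faster than a uniform (random walk) flip proposal would on the same target; the reverse-probability correction in `log_acc` is what keeps the chain exactly invariant for the target.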
Dr. Quan Zhou is an Assistant Professor of Statistics at Texas A&M University. He received his Ph.D. in statistical genetics from Baylor College of Medicine in 2017 and then spent two years as a postdoctoral research fellow in the Department of Statistics at Rice University.
His current research centers on Markov chain Monte Carlo sampling, stochastic control, and optimal stopping problems. He has also worked on statistical methodology for variable selection, graphical models and randomized controlled trials, and he is particularly interested in computationally challenging problems arising in genomics.