Data Augmentation MCMC for Bayesian Inference from Privatized Data
Differentially private mechanisms protect privacy by introducing additional randomness into the data. When the data analyst has access only to the privatized data, performing valid statistical inference on parameters underlying the confidential data is challenging. Specifically, the likelihood function of the privatized data requires integrating over the large space of confidential databases and is typically intractable. For Bayesian analysis, this results in a posterior distribution that is doubly intractable, rendering traditional MCMC techniques inapplicable. We propose an MCMC framework to perform Bayesian inference from privatized data, applicable to a wide range of statistical models and privacy mechanisms. Our MCMC algorithm augments the model parameters with the unobserved confidential data, and alternately updates each conditional on the other. For the potentially challenging step of updating the confidential data, we propose a generic approach that exploits the privacy guarantee of the mechanism to ensure efficiency. We give results on the computational complexity, acceptance rate, and mixing properties of our MCMC. This talk is based on joint work with Jordan Awan, Robin Gong, and Vinayak Rao (https://arxiv.org/abs/2206.00710, NeurIPS 2022).
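To make the alternating scheme concrete, here is a minimal toy sketch of data augmentation MCMC for privatized data. It is an illustration under my own assumptions, not the authors' implementation: confidential records are x_i ~ Normal(theta, 1), and the privacy mechanism releases y_i = x_i + Laplace(0, 1/eps). The sampler alternates (1) a Metropolis-Hastings update of each confidential record that proposes from the model, so the acceptance ratio involves only the mechanism's likelihood, and (2) a conjugate Gibbs update of theta given the imputed records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): confidential data x_i ~ N(theta, 1);
# the mechanism releases y_i = x_i + Laplace(0, 1/eps), giving eps-DP per record.
eps = 1.0
n = 50
theta_true = 2.0
x_true = rng.normal(theta_true, 1.0, size=n)
y = x_true + rng.laplace(0.0, 1.0 / eps, size=n)

def log_mechanism(y_obs, x_conf):
    # Log density (up to a constant) of the Laplace privacy mechanism, y | x.
    return -eps * np.abs(y_obs - x_conf)

theta = 0.0   # current model parameter
x = y.copy()  # augmented confidential data, initialized at the noisy release
samples = []
for it in range(5000):
    # Step 1: update each record x_i | theta, y_i via Metropolis-Hastings.
    # Proposing from the model x_i ~ N(theta, 1) cancels the model term, so
    # the log acceptance ratio is just a mechanism likelihood ratio -- which
    # the eps-DP guarantee keeps bounded, helping the acceptance rate.
    prop = rng.normal(theta, 1.0, size=n)
    log_acc = log_mechanism(y, prop) - log_mechanism(y, x)
    accept = np.log(rng.uniform(size=n)) < log_acc
    x = np.where(accept, prop, x)

    # Step 2: update theta | x by a conjugate Gibbs draw
    # (flat prior on theta, so theta | x ~ N(mean(x), 1/n)).
    theta = rng.normal(x.mean(), 1.0 / np.sqrt(n))
    samples.append(theta)

post_mean = np.mean(samples[1000:])  # posterior mean after burn-in
```

The per-record proposal from the model is one simple way to exploit the privacy guarantee: since the mechanism's likelihood ratio is bounded under eps-DP, the acceptance probability cannot collapse. The paper develops this idea in much greater generality, with formal results on complexity and mixing.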
Dr. Nianqiao (Phyllis) Ju is an assistant professor of statistics at Purdue University. Her research focuses on Bayesian inference and computational methods.