Convergence rates and asymptotic standard errors for Markov chain Monte Carlo algorithms for Bayesian probit regression

Bibliographic Details
Published in: Journal of the Royal Statistical Society, Series B (Statistical Methodology), Vol. 69, No. 4, pp. 607-623
Main Authors: Roy, Vivekananda; Hobert, James P.
Format: Journal Article
Language: English
Published: Oxford, UK: Blackwell Publishing Ltd, 01.09.2007
Series: Journal of the Royal Statistical Society Series B
ISSN: 1369-7412, 1467-9868
Description
Summary: Consider a probit regression problem in which Y₁, …, Yₙ are independent Bernoulli random variables such that Pr(Yᵢ = 1) = Φ(xᵢᵀβ), where xᵢ is a p-dimensional vector of known covariates that are associated with Yᵢ, β is a p-dimensional vector of unknown regression coefficients and Φ(·) denotes the standard normal distribution function. We study Markov chain Monte Carlo algorithms for exploring the intractable posterior density that results when the probit regression likelihood is combined with a flat prior on β. We prove that Albert and Chib's data augmentation algorithm and Liu and Wu's PX-DA algorithm both converge at a geometric rate, which ensures the existence of central limit theorems for ergodic averages under a second-moment condition. Although these two algorithms are essentially equivalent in terms of computational complexity, results of Hobert and Marchev imply that the PX-DA algorithm is theoretically more efficient in the sense that the asymptotic variance in the central limit theorem under the PX-DA algorithm is no larger than that under Albert and Chib's algorithm. We also construct minorization conditions that allow us to exploit regenerative simulation techniques for the consistent estimation of asymptotic variances. As an illustration, we apply our results to van Dyk and Meng's lupus data. This example demonstrates that huge gains in efficiency are possible by using the PX-DA algorithm instead of Albert and Chib's algorithm.
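For readers unfamiliar with the data augmentation scheme the abstract refers to, the following is a minimal Python sketch of Albert and Chib's two-step sampler for the flat-prior probit posterior: latent truncated-normal variables are drawn given β, and β is then drawn from its Gaussian conditional given the latents. The function name, iteration count, and starting value are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import truncnorm

def albert_chib_da(X, y, n_iter=5000, seed=None):
    """Albert and Chib's data-augmentation sampler for probit regression
    with a flat prior on beta (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)   # (X'X)^{-1}: conditional covariance of beta
    chol = np.linalg.cholesky(XtX_inv)
    beta = np.zeros(p)                 # arbitrary starting value
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # Latent step: z_i ~ N(x_i' beta, 1) truncated to (0, inf) if y_i = 1
        # and to (-inf, 0) if y_i = 0; bounds below are in standardized units.
        a = np.where(y == 1, -mu, -np.inf)
        b = np.where(y == 1, np.inf, -mu)
        z = truncnorm.rvs(a, b, loc=mu, size=n, random_state=rng)
        # Regression step: beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}).
        beta = XtX_inv @ (X.T @ z) + chol @ rng.standard_normal(p)
        draws[t] = beta
    return draws
```

The PX-DA algorithm studied in the paper inserts an additional, inexpensive rescaling of the latent vector z between these two steps; that modification is not shown in the sketch above.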
Bibliography: http://dx.doi.org/10.1111/j.1467-9868.2007.00602.x
ArticleID:RSSB602
ark:/67375/WNG-QMVRZGSL-W
istex:FA04BA5D1AB979E2E9FC3B8356725608727B20ED
DOI: 10.1111/j.1467-9868.2007.00602.x