
Electronic data

  • dbps_final_whole_AAM

    Accepted author manuscript, 1.22 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Text available via DOI:


A discrete bouncy particle sampler

Research output: Contribution to journal › Journal article › peer-review

Journal publication date: 26/02/2021
Number of pages: 13
Publication status: E-pub ahead of print
Early online date: 26/02/2021
Original language: English


Most Markov chain Monte Carlo methods operate in discrete time and are reversible with respect to the target probability. Nevertheless, it is now understood that the use of nonreversible Markov chains can be beneficial in many contexts. In particular, the recently proposed bouncy particle sampler leverages a continuous-time, nonreversible Markov process and empirically shows state-of-the-art performance when used to explore certain probability densities; however, its implementation typically requires the computation of local upper bounds on the gradient of the log target density. We present the discrete bouncy particle sampler, a general algorithm based upon a guided random walk, a partial refreshment of direction, and a delayed-rejection step. We show that the bouncy particle sampler can be understood as a scaling limit of a special case of our algorithm. In contrast to the bouncy particle sampler, implementing the discrete bouncy particle sampler requires only point-wise evaluation of the target density and its gradient. We propose extensions of the basic algorithm for situations when the exact gradient of the target density is not available. In a Gaussian setting, we establish a scaling limit for the radial process as the dimension increases to infinity. We leverage this result to obtain the theoretical efficiency of the discrete bouncy particle sampler as a function of the partial-refreshment parameter, which leads to a simple and robust tuning criterion. A further analysis in a more general setting suggests that this tuning criterion applies more generally. Theoretical and empirical efficiency curves are then compared for different targets and algorithm variations.
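To make the ingredients named in the abstract concrete, here is a minimal illustrative sketch of a sampler of this kind: a guided random walk along a persistent velocity, a partial (autoregressive) refreshment of the velocity, and a delayed-rejection second stage that "bounces" the velocity off the gradient of the log target at the rejected proposal, with a direction flip on full rejection. This is not the authors' exact algorithm or notation; the function names, the tuning parameters `delta` and `beta`, and the specific refreshment scheme are assumptions made for illustration.

```python
import numpy as np

def dbps_sketch(log_pi, grad_log_pi, x0, n_iter, delta=0.6, beta=0.3, rng=None):
    """Illustrative discrete-bouncy-particle-style sampler (not the paper's exact scheme).

    Uses only point-wise evaluations of log_pi and grad_log_pi, as the
    abstract emphasizes: no upper bounds on the gradient are needed.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = x.size
    v = rng.standard_normal(d)          # persistent velocity
    samples = np.empty((n_iter, d))
    for i in range(n_iter):
        # Partial refreshment: AR(1) update leaves v ~ N(0, I) invariant;
        # beta interpolates between full persistence (0) and full refresh (1).
        v = np.sqrt(1.0 - beta**2) * v + beta * rng.standard_normal(d)
        # Stage 1: guided random-walk proposal along the current velocity.
        y = x + delta * v
        log_a1 = min(0.0, log_pi(y) - log_pi(x))
        if np.log(rng.uniform()) < log_a1:
            x = y                        # accept; keep moving in direction v
        else:
            # Stage 2 (delayed rejection): reflect v in grad log pi at the
            # rejected point y, then propose one step from y with the
            # bounced velocity.
            g = grad_log_pi(y)
            vb = v - 2.0 * (v @ g) / (g @ g) * g
            z = y + delta * vb
            # The reverse move from (z, -vb) would also propose y, so the
            # standard delayed-rejection ratio applies with point-mass proposals.
            log_a1_rev = min(0.0, log_pi(y) - log_pi(z))
            if log_a1_rev < 0.0:
                log_num = log_pi(z) + np.log1p(-np.exp(log_a1_rev))
                log_den = log_pi(x) + np.log1p(-np.exp(log_a1))
                accept2 = np.log(rng.uniform()) < log_num - log_den
            else:
                accept2 = False
            if accept2:
                x, v = z, vb             # accepted bounce
            else:
                v = -v                   # full rejection: flip the direction
        samples[i] = x
    return samples

# Usage: a standard Gaussian target in 5 dimensions.
log_pi = lambda x: -0.5 * (x @ x)
grad_log_pi = lambda x: -x
out = dbps_sketch(log_pi, grad_log_pi, np.zeros(5), 20000,
                  rng=np.random.default_rng(1))
```

On this toy target the empirical moments of `out` (after discarding a burn-in) should be close to those of N(0, I); in practice `beta` would be tuned, which is exactly the role of the partial-refreshment parameter analysed in the paper.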