Electronic data

  • STAPRO-PDP

    Rights statement: This is the author’s version of a work that was accepted for publication in Statistics and Probability Letters. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Statistics and Probability Letters, 136 (2018). DOI: 10.1016/j.spl.2018.02.021

    Accepted author manuscript, 442 KB, PDF-document

    Embargo ends: 2/03/19

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Piecewise deterministic Markov processes for scalable Monte Carlo on restricted domains

Research output: Contribution to journal › Journal article

Published
  • Joris Bierkens
  • Alexandre Bouchard-Côté
  • Arnaud Doucet
  • Andrew B. Duncan
  • Paul Fearnhead
  • Thibaut Lienart
  • Gareth Roberts
  • Sebastian J. Vollmer
Journal publication date: 05/2018
Journal: Statistics and Probability Letters
Volume: 136
Number of pages: 7
Pages (from-to): 148-154
State: Published
Early online date: 2/03/18
Original language: English

Abstract

Piecewise deterministic Monte Carlo (PDMC) methods form a class of continuous-time Markov chain Monte Carlo (MCMC) methods that have recently been shown to hold considerable promise. Being non-reversible, PDMC methods often mix significantly faster than classical reversible MCMC competitors. Moreover, in a Bayesian context they can exploit sub-sampling ideas, so that each iteration needs to access only one data point, whilst still maintaining the true posterior distribution as their invariant distribution. However, current methods are limited to parameter spaces of real d-dimensional vectors. We show how these algorithms can be extended to applications involving restricted parameter spaces. In simulations we observe that the resulting algorithm is more efficient than Hamiltonian Monte Carlo for sampling from truncated logistic regression models. The theoretical framework used to justify this extension lays the foundation for the development of other novel PDMC algorithms.
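To illustrate the kind of dynamics the abstract describes, the sketch below implements a minimal one-dimensional Zig-Zag process (a standard PDMC method) targeting a standard Gaussian truncated to the restricted domain [0, ∞), with the boundary handled by reflecting the velocity. This is a toy illustration under my own assumptions, not the algorithm developed in the paper; all names and the reflecting boundary treatment are chosen here for exposition.

```python
import math
import random

def zigzag_halfnormal(T_end=10000.0, seed=1):
    """Toy 1-D Zig-Zag sampler for a standard Gaussian truncated to
    [0, inf).  With potential U(x) = x^2/2, the velocity-switching rate
    is max(0, v * x).  When the trajectory reaches the boundary x = 0,
    the velocity is reflected (flipped), one simple way of keeping the
    deterministic dynamics inside the restricted domain."""
    rng = random.Random(seed)
    x, v, t = 1.0, 1.0, 0.0
    integral = 0.0  # time integral of x(t), used to estimate E[x]
    while t < T_end:
        if v > 0:
            # rate along the trajectory is lambda(s) = x + s; invert the
            # integrated rate x*T + T^2/2 = E for an Exp(1) draw E
            e = rng.expovariate(1.0)
            tau = -x + math.sqrt(x * x + 2.0 * e)
            boundary = math.inf  # moving away from x = 0
        else:
            # moving toward 0: the rate max(0, -(x - s)) is zero until
            # s = x, but the boundary is hit at exactly s = x first
            tau = math.inf
            boundary = x
        dt = min(tau, boundary, T_end - t)
        # the trajectory is piecewise linear, so this integral is exact
        integral += x * dt + 0.5 * v * dt * dt
        x += v * dt
        t += dt
        if t < T_end:
            v = -v  # switch event or boundary reflection: flip velocity
    return integral / T_end  # estimate of E[x] = sqrt(2/pi) ~ 0.798

est = zigzag_halfnormal()
```

For this symmetric target, reflecting the velocity at x = 0 coincides with folding the unrestricted Zig-Zag process at the origin, so the trajectory average converges to the half-normal mean sqrt(2/pi) ≈ 0.798.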
