
Electronic data

  • ABCasymptotics

    Rights statement: This is a pre-copy-editing, author-produced PDF of an article accepted for publication in Biometrika following peer review. The definitive publisher-authenticated version Wentao Li, Paul Fearnhead; On the asymptotic efficiency of approximate Bayesian computation estimators, Biometrika, Volume 105, Issue 2, 1 June 2018, Pages 285–299, https://doi.org/10.1093/biomet/asx078 is available online at: https://academic.oup.com/biomet/article/105/2/285/4818354

    Accepted author manuscript, 357 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

  • Supplementary Material

    Accepted author manuscript, 284 KB, PDF document

Links

Text available via DOI: https://doi.org/10.1093/biomet/asx078


On the asymptotic efficiency of approximate Bayesian computation estimators

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

On the asymptotic efficiency of approximate Bayesian computation estimators. / Li, Wentao; Fearnhead, Paul.
In: Biometrika, Vol. 105, No. 2, 01.06.2018, p. 285-299.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Vancouver

Li W, Fearnhead P. On the asymptotic efficiency of approximate Bayesian computation estimators. Biometrika. 2018 Jun 1;105(2):285-299. Epub 2018 Jan 20. doi: 10.1093/biomet/asx078

Bibtex

@article{a9d7ffbea64d42a6985daedc1b2f369e,
title = "On the asymptotic efficiency of approximate Bayesian computation estimators",
abstract = "Many statistical applications involve models for which it is difficult to evaluate the likelihood, but from which it is relatively easy to sample. Approximate Bayesian computation is a likelihood-free method for implementing Bayesian inference in such cases. We present results on the asymptotic variance of estimators obtained using approximate Bayesian computation in a large-data limit. Our key assumption is that the data is summarized by a fixed-dimensional summary statistic that obeys a central limit theorem. We prove asymptotic normality of the mean of the approximate Bayesian computation posterior. This result also shows that, in terms of asymptotic variance, we should use a summary statistic that is the same dimension as the parameter vector, p; and that any summary statistic of higher dimension can be reduced, through a lineartransformation, to dimension p in a way that can only reduce the asymptotic variance of the posterior mean. We look at how the Monte Carlo error of an importance sampling algorithm that samples from the approximate Bayesian computation posterior affects the accuracy of estimators. We give conditions on the importance sampling proposal distribution such that the variance of the estimator will be the same order as that of the maximum likelihood estimator based on the summary statistics used. This suggests an iterative importance sampling algorithm, which we evaluate empirically on a stochastic volatility model.",
keywords = "Approximate Bayesian computation, Asymptotics, Dimension reduction , Importance Sampling, Partial Information, Proposal distribution",
author = "Wentao Li and Paul Fearnhead",
note = "This is a pre-copy-editing, author-produced PDF of an article accepted for publication in Biometrika following peer review. The definitive publisher-authenticated version Wentao Li, Paul Fearnhead; On the asymptotic efficiency of approximate Bayesian computation estimators, Biometrika, Volume 105, Issue 2, 1 June 2018, Pages 285–299, https://doi.org/10.1093/biomet/asx078 is available online at: https://academic.oup.com/biomet/article/105/2/285/4818354",
year = "2018",
month = jun,
day = "1",
doi = "10.1093/biomet/asx078",
language = "English",
volume = "105",
pages = "285--299",
journal = "Biometrika",
issn = "0006-3444",
publisher = "Oxford University Press",
number = "2",

}

RIS

TY - JOUR

T1 - On the asymptotic efficiency of approximate Bayesian computation estimators

AU - Li, Wentao

AU - Fearnhead, Paul

N1 - This is a pre-copy-editing, author-produced PDF of an article accepted for publication in Biometrika following peer review. The definitive publisher-authenticated version Wentao Li, Paul Fearnhead; On the asymptotic efficiency of approximate Bayesian computation estimators, Biometrika, Volume 105, Issue 2, 1 June 2018, Pages 285–299, https://doi.org/10.1093/biomet/asx078 is available online at: https://academic.oup.com/biomet/article/105/2/285/4818354

PY - 2018/6/1

Y1 - 2018/6/1

N2 - Many statistical applications involve models for which it is difficult to evaluate the likelihood, but from which it is relatively easy to sample. Approximate Bayesian computation is a likelihood-free method for implementing Bayesian inference in such cases. We present results on the asymptotic variance of estimators obtained using approximate Bayesian computation in a large-data limit. Our key assumption is that the data is summarized by a fixed-dimensional summary statistic that obeys a central limit theorem. We prove asymptotic normality of the mean of the approximate Bayesian computation posterior. This result also shows that, in terms of asymptotic variance, we should use a summary statistic that is the same dimension as the parameter vector, p; and that any summary statistic of higher dimension can be reduced, through a linear transformation, to dimension p in a way that can only reduce the asymptotic variance of the posterior mean. We look at how the Monte Carlo error of an importance sampling algorithm that samples from the approximate Bayesian computation posterior affects the accuracy of estimators. We give conditions on the importance sampling proposal distribution such that the variance of the estimator will be the same order as that of the maximum likelihood estimator based on the summary statistics used. This suggests an iterative importance sampling algorithm, which we evaluate empirically on a stochastic volatility model.

AB - Many statistical applications involve models for which it is difficult to evaluate the likelihood, but from which it is relatively easy to sample. Approximate Bayesian computation is a likelihood-free method for implementing Bayesian inference in such cases. We present results on the asymptotic variance of estimators obtained using approximate Bayesian computation in a large-data limit. Our key assumption is that the data is summarized by a fixed-dimensional summary statistic that obeys a central limit theorem. We prove asymptotic normality of the mean of the approximate Bayesian computation posterior. This result also shows that, in terms of asymptotic variance, we should use a summary statistic that is the same dimension as the parameter vector, p; and that any summary statistic of higher dimension can be reduced, through a linear transformation, to dimension p in a way that can only reduce the asymptotic variance of the posterior mean. We look at how the Monte Carlo error of an importance sampling algorithm that samples from the approximate Bayesian computation posterior affects the accuracy of estimators. We give conditions on the importance sampling proposal distribution such that the variance of the estimator will be the same order as that of the maximum likelihood estimator based on the summary statistics used. This suggests an iterative importance sampling algorithm, which we evaluate empirically on a stochastic volatility model.

KW - Approximate Bayesian computation

KW - Asymptotics

KW - Dimension reduction

KW - Importance Sampling

KW - Partial Information

KW - Proposal distribution

U2 - 10.1093/biomet/asx078

DO - 10.1093/biomet/asx078

M3 - Journal article

VL - 105

SP - 285

EP - 299

JO - Biometrika

JF - Biometrika

SN - 0006-3444

IS - 2

ER -
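
Illustration

The abstract above concerns the asymptotic behaviour of estimators based on the approximate Bayesian computation posterior. As a point of reference only, the Python sketch below shows basic rejection ABC, the likelihood-free setting the abstract refers to. It is not the authors' iterative importance sampling algorithm; the normal model, the prior, the summary statistic, and the tolerance are hypothetical choices made purely for illustration.

# A minimal illustrative sketch of rejection-based approximate Bayesian
# computation (ABC). This is NOT the iterative importance sampling algorithm
# of Li & Fearnhead (2018); the model, prior, summary statistic, and
# tolerance below are hypothetical choices made only for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "observed" data: 200 draws from N(theta_true, 1).
theta_true = 2.0
y_obs = rng.normal(theta_true, 1.0, size=200)
s_obs = y_obs.mean()  # fixed-dimensional summary statistic (here p = 1)

def prior_sample(n):
    """Draw parameter values from a N(0, 10^2) prior (an arbitrary choice)."""
    return rng.normal(0.0, 10.0, size=n)

def simulate_summary(theta):
    """Simulate a data set from the model and return its summary statistic."""
    y = rng.normal(theta, 1.0, size=200)
    return y.mean()

# Rejection ABC: keep parameter draws whose simulated summary lies within a
# tolerance (bandwidth) eps of the observed summary.
n_sims, eps = 100_000, 0.05
theta_prop = prior_sample(n_sims)
s_sim = np.array([simulate_summary(t) for t in theta_prop])
accepted = theta_prop[np.abs(s_sim - s_obs) < eps]

# The ABC posterior mean is the type of estimator whose asymptotic variance
# the paper studies.
print(f"accepted {accepted.size} draws; ABC posterior mean = {accepted.mean():.3f}")

In this toy example the summary statistic has the same dimension as the parameter (p = 1), in line with the abstract's observation that summaries of higher dimension can be reduced to dimension p by a linear transformation without increasing the asymptotic variance of the posterior mean.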