TY - JOUR
T1 - Preferential Subsampling for Stochastic Gradient Langevin Dynamics
AU - Putcha, Srshti
AU - Nemeth, Christopher
AU - Fearnhead, Paul
PY - 2023/4/27
Y1 - 2023/4/27
AB - Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional MCMC by constructing an unbiased estimate of the gradient of the log-posterior from a small, uniformly weighted subsample of the data. While efficient to compute, the resulting gradient estimator may exhibit high variance, which can impair sampler performance. The problem of variance control has traditionally been addressed by constructing a better stochastic gradient estimator, often using control variates. We propose to use a discrete, nonuniform probability distribution to preferentially subsample data points that have a greater impact on the stochastic gradient. In addition, we present a method of adaptively adjusting the subsample size at each iteration of the algorithm, increasing the subsample size in regions of the sample space where the gradient is harder to estimate. We demonstrate that such an approach can maintain the same level of accuracy while substantially reducing the average subsample size used.
M3 - Conference article
AN - SCOPUS:85165166445
VL - 206
SP - 8837
EP - 8856
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
SN - 2640-3498
T2 - 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023
Y2 - 25 April 2023 through 27 April 2023
ER -
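
The abstract describes SGLD with nonuniform (preferential) subsampling. The following is a minimal illustrative sketch of that idea on a toy 1-D Gaussian mean model, not the authors' exact algorithm: the gradient-magnitude weighting scheme, the flat prior, and all names here are assumptions made for illustration. The key point it demonstrates is the importance-weighted gradient estimator, which stays unbiased under nonuniform sampling probabilities.

```python
import numpy as np

# Sketch: SGLD with preferential (nonuniform) subsampling on a toy
# 1-D Gaussian mean model with a flat prior. Illustrative only; the
# paper's weighting and adaptive subsample-size scheme may differ.

rng = np.random.default_rng(0)
N = 10_000
data = rng.normal(loc=2.0, scale=1.0, size=N)

def grad_log_lik(theta, x):
    # d/dtheta log N(x | theta, 1) = x - theta
    return x - theta

# Nonuniform subsampling weights p_i: one simple (assumed) choice of
# "impact" is the gradient magnitude at a reference point theta_hat.
theta_hat = data.mean()
weights = np.abs(grad_log_lik(theta_hat, data)) + 1e-6
weights /= weights.sum()

def sgld_step(theta, step_size, n_sub):
    idx = rng.choice(N, size=n_sub, p=weights)
    # Importance-weighted (Horvitz-Thompson) estimate: dividing each
    # sampled gradient by N * p_i keeps the estimator of the full-data
    # gradient unbiased under nonuniform sampling probabilities.
    grad_est = N * np.mean(grad_log_lik(theta, data[idx]) / (N * weights[idx]))
    # Standard SGLD update: half-step gradient drift plus injected noise.
    noise = rng.normal(scale=np.sqrt(step_size))
    return theta + 0.5 * step_size * grad_est + noise

theta = 0.0
for _ in range(5_000):
    theta = sgld_step(theta, step_size=1e-4, n_sub=32)
print("approximate posterior draw:", theta)
```

Because high-impact points are drawn more often and then down-weighted, the variance of the gradient estimate is reduced relative to uniform subsampling at the same subsample size, which is the mechanism the abstract exploits.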