

Preferential Subsampling for Stochastic Gradient Langevin Dynamics

Research output: Contribution to Journal/Magazine › Conference article › peer-review

Published

Journal publication date: 27/04/2023
Journal: Proceedings of Machine Learning Research
Volume: 206
Number of pages: 20
Pages (from-to): 8837-8856
Publication status: Published
Original language: English
Event: 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023 - Valencia, Spain
Duration: 25/04/2023 - 27/04/2023

Conference

Conference: 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023
Country/Territory: Spain
City: Valencia
Period: 25/04/23 - 27/04/23

Abstract

Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional MCMC by constructing an unbiased estimate of the gradient of the log-posterior from a small, uniformly weighted subsample of the data. While cheap to compute, the resulting gradient estimator can exhibit high variance, which degrades sampler performance. Variance control has traditionally been addressed by constructing a better stochastic gradient estimator, often using control variates. We instead propose to use a discrete, nonuniform probability distribution to preferentially subsample data points that have a greater impact on the stochastic gradient. In addition, we present a method for adaptively adjusting the subsample size at each iteration of the algorithm, so that the subsample size is increased in regions of the sample space where the gradient is harder to estimate. We demonstrate that such an approach can maintain the same level of accuracy while substantially reducing the average subsample size used.
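
As a rough illustration of the idea described in the abstract, the sketch below implements a stochastic gradient Langevin dynamics (SGLD) loop in which data points are drawn with nonuniform probabilities and the per-point gradients are importance-weighted so the gradient estimator remains unbiased. The function and parameter names (`sgld_preferential`, `grad_log_lik`, `grad_log_prior`) and the particular weights used in the usage example are assumptions made for illustration only; the paper's actual preferential weighting and its adaptive subsample-size scheme are not reproduced here.

```python
import numpy as np


def sgld_preferential(grad_log_prior, grad_log_lik, data, probs,
                      theta0, step_size, subsample_size, n_iters, rng=None):
    """One possible SGLD loop with nonuniform (preferential) subsampling.

    probs[i] is the probability of selecting data point i; each sampled
    gradient is importance-weighted by 1 / probs[i] so that the stochastic
    gradient remains an unbiased estimate of the full-data gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(data)
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_iters):
        # Draw indices with replacement from the preferential distribution.
        idx = rng.choice(n, size=subsample_size, replace=True, p=probs)
        # Unbiased estimate of sum_i grad log p(x_i | theta).
        grad_sum = sum(grad_log_lik(theta, data[i]) / probs[i]
                       for i in idx) / subsample_size
        grad_est = grad_log_prior(theta) + grad_sum
        # Langevin update: half-step drift plus Gaussian noise of variance step_size.
        theta = (theta
                 + 0.5 * step_size * grad_est
                 + rng.normal(scale=np.sqrt(step_size), size=theta.shape))
        samples.append(theta.copy())
    return np.array(samples)


# Illustrative usage: posterior mean of a Gaussian with known unit variance.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(loc=2.0, scale=1.0, size=1000)
    # Hypothetical preferential weights (larger weight to points far from the
    # median); the paper's own weighting scheme differs.
    w = np.abs(data - np.median(data)) + 1e-3
    probs = w / w.sum()
    samples = sgld_preferential(
        grad_log_prior=lambda th: -th / 100.0,     # N(0, 10^2) prior
        grad_log_lik=lambda th, x: (x - th),       # d/dtheta log N(x | theta, 1)
        data=data, probs=probs, theta0=0.0,
        step_size=1e-4, subsample_size=20, n_iters=2000, rng=rng,
    )
    print("posterior mean estimate:", samples[1000:].mean())
```

In this sketch the subsample size is fixed; the adaptive version described in the abstract would additionally vary `subsample_size` across iterations, using a larger subsample where the gradient is harder to estimate.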