
Electronic data

  • 2210.16189v2

    Accepted author manuscript, 2.08 MB, PDF document

    Available under license: Creative Commons Attribution 4.0 International (CC BY 4.0)


Preferential Subsampling for Stochastic Gradient Langevin Dynamics

Research output: Working paper › Preprint

Forthcoming
Publication date: 23/02/2023
Publisher: Proceedings of Machine Learning Research
Original language: English

Publication series

Name: Artificial Intelligence and Statistics
Publisher: Proceedings of Machine Learning Research
Volume: 206

Abstract

Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional MCMC by constructing an unbiased estimate of the gradient of the log-posterior from a small, uniformly weighted subsample of the data. While efficient to compute, the resulting gradient estimator may exhibit high variance and adversely affect sampler performance. The problem of variance control has traditionally been addressed by constructing a better stochastic gradient estimator, often using control variates. We instead propose to use a discrete, non-uniform probability distribution to preferentially subsample data points that have a greater impact on the stochastic gradient. In addition, we present a method of adaptively adjusting the subsample size at each iteration of the algorithm, so that the subsample size increases in regions of the sample space where the gradient is harder to estimate. We demonstrate that such an approach can maintain the same level of accuracy while substantially reducing the average subsample size used.
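To make the idea concrete, below is a minimal sketch (not the authors' implementation) of stochastic gradient Langevin dynamics with a fixed, non-uniform subsampling distribution, on a toy Gaussian-mean model. The choice of weights proportional to per-datum gradient magnitude at a pilot estimate, and all names (grad_log_lik, sgld_preferential, and so on), are illustrative assumptions; the paper's exact construction may differ.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D Gaussian observations with unknown mean theta
# (an illustrative model, not the paper's experiments).
N = 10_000
x = rng.normal(loc=2.0, scale=1.0, size=N)

def grad_log_lik(theta, xi):
    # Per-datum gradient of log N(xi | theta, 1) with respect to theta.
    return xi - theta

def grad_log_prior(theta):
    # Gradient of a standard normal prior on theta.
    return -theta

def sgld_preferential(theta0, probs, n_sub, step, n_iters):
    # SGLD where the minibatch is drawn with replacement from the
    # non-uniform distribution `probs`; each term is importance-weighted
    # by 1 / probs[i], so the estimate of the full-data log-likelihood
    # gradient remains unbiased.
    theta = theta0
    samples = np.empty(n_iters)
    for t in range(n_iters):
        idx = rng.choice(N, size=n_sub, p=probs)
        g_lik = np.mean(grad_log_lik(theta, x[idx]) / probs[idx])
        g = grad_log_prior(theta) + g_lik
        theta = theta + 0.5 * step * g + rng.normal(scale=np.sqrt(step))
        samples[t] = theta
    return samples

# One plausible choice of weights: probabilities proportional to the
# per-datum gradient magnitude at a pilot estimate (an assumption made
# for this sketch).
pilot = x.mean()
w = np.abs(grad_log_lik(pilot, x)) + 1e-6
probs = w / w.sum()

samples = sgld_preferential(theta0=0.0, probs=probs, n_sub=100,
                            step=1e-4, n_iters=5_000)
print("posterior mean estimate:", samples[2500:].mean())

The paper additionally adapts the subsample size across iterations, increasing it where the gradient is harder to estimate; a fixed n_sub is used above for brevity.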

Bibliographic note

22 pages, 5 figures. To appear in the proceedings of AISTATS 2023