Preferential Subsampling for Stochastic Gradient Langevin Dynamics

Research output: Contribution to Journal/Magazine › Conference article › peer-review

Published

Standard

Preferential Subsampling for Stochastic Gradient Langevin Dynamics. / Putcha, Srshti; Nemeth, Christopher; Fearnhead, Paul.
In: Proceedings of Machine Learning Research, Vol. 206, 27.04.2023, p. 8837-8856.

Harvard

Putcha, S, Nemeth, C & Fearnhead, P 2023, 'Preferential Subsampling for Stochastic Gradient Langevin Dynamics', Proceedings of Machine Learning Research, vol. 206, pp. 8837-8856.

APA

Putcha, S., Nemeth, C., & Fearnhead, P. (2023). Preferential Subsampling for Stochastic Gradient Langevin Dynamics. Proceedings of Machine Learning Research, 206, 8837-8856.

Vancouver

Putcha S, Nemeth C, Fearnhead P. Preferential Subsampling for Stochastic Gradient Langevin Dynamics. Proceedings of Machine Learning Research. 2023 Apr 27;206:8837-8856.

Author

Putcha, Srshti ; Nemeth, Christopher ; Fearnhead, Paul. / Preferential Subsampling for Stochastic Gradient Langevin Dynamics. In: Proceedings of Machine Learning Research. 2023 ; Vol. 206. pp. 8837-8856.

Bibtex

@article{81323616034143089d3b468bf3502874,
title = "Preferential Subsampling for Stochastic Gradient Langevin Dynamics",
abstract = "Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional MCMC, by constructing an unbiased estimate of the gradient of the log-posterior with a small, uniformly-weighted subsample of the data. While efficient to compute, the resulting gradient estimator may exhibit a high variance and impact sampler performance. The problem of variance control has been traditionally addressed by constructing a better stochastic gradient estimator, often using control variates. We propose to use a discrete, nonuniform probability distribution to preferentially subsample data points that have a greater impact on the stochastic gradient. In addition, we present a method of adaptively adjusting the subsample size at each iteration of the algorithm, so that we increase the subsample size in areas of the sample space where the gradient is harder to estimate. We demonstrate that such an approach can maintain the same level of accuracy while substantially reducing the average subsample size that is used.",
author = "Srshti Putcha and Christopher Nemeth and Paul Fearnhead",
year = "2023",
month = apr,
day = "27",
language = "English",
volume = "206",
pages = "8837--8856",
journal = "Proceedings of Machine Learning Research",
issn = "2640-3498",
note = "26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023 ; Conference date: 25-04-2023 Through 27-04-2023",

}

RIS

TY - JOUR

T1 - Preferential Subsampling for Stochastic Gradient Langevin Dynamics

AU - Putcha, Srshti

AU - Nemeth, Christopher

AU - Fearnhead, Paul

PY - 2023/4/27

Y1 - 2023/4/27

N2 - Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional MCMC, by constructing an unbiased estimate of the gradient of the log-posterior with a small, uniformly-weighted subsample of the data. While efficient to compute, the resulting gradient estimator may exhibit a high variance and impact sampler performance. The problem of variance control has been traditionally addressed by constructing a better stochastic gradient estimator, often using control variates. We propose to use a discrete, nonuniform probability distribution to preferentially subsample data points that have a greater impact on the stochastic gradient. In addition, we present a method of adaptively adjusting the subsample size at each iteration of the algorithm, so that we increase the subsample size in areas of the sample space where the gradient is harder to estimate. We demonstrate that such an approach can maintain the same level of accuracy while substantially reducing the average subsample size that is used.

AB - Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional MCMC, by constructing an unbiased estimate of the gradient of the log-posterior with a small, uniformly-weighted subsample of the data. While efficient to compute, the resulting gradient estimator may exhibit a high variance and impact sampler performance. The problem of variance control has been traditionally addressed by constructing a better stochastic gradient estimator, often using control variates. We propose to use a discrete, nonuniform probability distribution to preferentially subsample data points that have a greater impact on the stochastic gradient. In addition, we present a method of adaptively adjusting the subsample size at each iteration of the algorithm, so that we increase the subsample size in areas of the sample space where the gradient is harder to estimate. We demonstrate that such an approach can maintain the same level of accuracy while substantially reducing the average subsample size that is used.

M3 - Conference article

AN - SCOPUS:85165166445

VL - 206

SP - 8837

EP - 8856

JO - Proceedings of Machine Learning Research

JF - Proceedings of Machine Learning Research

SN - 2640-3498

T2 - 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023

Y2 - 25 April 2023 through 27 April 2023

ER -
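
For readers who want a concrete picture of the idea summarised in the abstract, the sketch below runs stochastic gradient Langevin dynamics (SGLD) on a toy Gaussian-mean model, drawing each minibatch from a discrete, nonuniform distribution and reweighting the per-datum gradients by their inverse selection probabilities so that the stochastic gradient stays unbiased. This is a minimal illustration only: the model, the choice of sampling weights, and all tuning constants are assumptions made here, the adaptive subsample-size component described in the abstract is omitted, and none of it reproduces the authors' exact algorithm.

# Minimal sketch of SGLD with preferential (nonuniform) subsampling.
# Toy model, weight choice, and tuning constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: x_i ~ N(true_mu, 1), with prior mu ~ N(0, 10^2).
N = 10_000
true_mu = 2.0
x = rng.normal(true_mu, 1.0, size=N)
prior_var = 10.0 ** 2

def grad_log_prior(mu):
    return -mu / prior_var

def grad_log_lik(mu, xs):
    # Per-datum gradient of log N(x_i | mu, 1) with respect to mu.
    return xs - mu

# Discrete, nonuniform sampling distribution: weight each data point by the
# magnitude of its gradient at a fixed reference point (here the sample mean),
# floored so every point keeps a positive selection probability.
mu_ref = x.mean()
weights = np.abs(grad_log_lik(mu_ref, x)) + 1e-3
p = weights / weights.sum()

def sgld_preferential(n_iter=5_000, m=50, step=1e-5):
    mu = 0.0
    samples = np.empty(n_iter)
    for t in range(n_iter):
        idx = rng.choice(N, size=m, replace=True, p=p)
        # Importance-weighted estimate of sum_i grad log p(x_i | mu):
        # (1/m) * sum_j grad_{i_j} / p_{i_j} is unbiased when indices are drawn from p.
        ghat = grad_log_prior(mu) + np.mean(grad_log_lik(mu, x[idx]) / p[idx])
        # Langevin update: half-step along the gradient estimate plus Gaussian
        # noise with variance equal to the step size.
        mu = mu + 0.5 * step * ghat + np.sqrt(step) * rng.normal()
        samples[t] = mu
    return samples

samples = sgld_preferential()
print("posterior mean estimate:", samples[1000:].mean())

In this sketch the sampling weights are fixed at a single reference point for simplicity; a practical scheme along the lines of the abstract would update the subsampling distribution (and, adaptively, the subsample size) as the chain explores the posterior.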