
Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates. / Sharrock, Louis; Nemeth, Christopher.
In: Proceedings of Machine Learning Research, Vol. 202, 23.07.2023, p. 30850-30882.

Harvard

Sharrock, L. and Nemeth, C. (2023) 'Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates', Proceedings of Machine Learning Research, 202, pp. 30850-30882.

APA

Sharrock, L., & Nemeth, C. (2023). Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates. Proceedings of Machine Learning Research, 202, 30850-30882.

Vancouver

Sharrock L, Nemeth C. Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates. Proceedings of Machine Learning Research. 2023 Jul 23;202:30850-30882.

Author

Sharrock, Louis ; Nemeth, Christopher. / Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates. In: Proceedings of Machine Learning Research. 2023 ; Vol. 202. pp. 30850-30882.

Bibtex

@article{151d84055a1d413d99e6422af19822eb,
  title = "Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates",
  abstract = "In recent years, particle-based variational inference (ParVI) methods such as Stein variational gradient descent (SVGD) have grown in popularity as scalable methods for Bayesian inference. Unfortunately, the properties of such methods invariably depend on hyperparameters such as the learning rate, which must be carefully tuned by the practitioner in order to ensure convergence to the target measure at a suitable rate. In this paper, we introduce a suite of new particle-based methods for scalable Bayesian inference based on coin betting, which are entirely learning-rate free. We illustrate the performance of our approach on a range of numerical examples, including several high-dimensional models and datasets, demonstrating comparable performance to other ParVI algorithms with no need to tune a learning rate.",
  author = "Louis Sharrock and Christopher Nemeth",
  note = "In: Proceedings of the 40th International Conference on Machine Learning (ICML), Hawaii, USA.",
  year = "2023",
  month = jul,
  day = "23",
  language = "English",
  volume = "202",
  pages = "30850--30882",
  journal = "Proceedings of Machine Learning Research",
  issn = "1938-7228",
  publisher = "ML Research Press",
}

RIS

TY  - JOUR
T1  - Coin Sampling
T2  - Gradient-Based Bayesian Inference without Learning Rates
AU  - Sharrock, Louis
AU  - Nemeth, Christopher
N1  - In: Proceedings of the 40th International Conference on Machine Learning (ICML), Hawaii, USA.
PY  - 2023/7/23
Y1  - 2023/7/23
N2  - In recent years, particle-based variational inference (ParVI) methods such as Stein variational gradient descent (SVGD) have grown in popularity as scalable methods for Bayesian inference. Unfortunately, the properties of such methods invariably depend on hyperparameters such as the learning rate, which must be carefully tuned by the practitioner in order to ensure convergence to the target measure at a suitable rate. In this paper, we introduce a suite of new particle-based methods for scalable Bayesian inference based on coin betting, which are entirely learning-rate free. We illustrate the performance of our approach on a range of numerical examples, including several high-dimensional models and datasets, demonstrating comparable performance to other ParVI algorithms with no need to tune a learning rate.
AB  - In recent years, particle-based variational inference (ParVI) methods such as Stein variational gradient descent (SVGD) have grown in popularity as scalable methods for Bayesian inference. Unfortunately, the properties of such methods invariably depend on hyperparameters such as the learning rate, which must be carefully tuned by the practitioner in order to ensure convergence to the target measure at a suitable rate. In this paper, we introduce a suite of new particle-based methods for scalable Bayesian inference based on coin betting, which are entirely learning-rate free. We illustrate the performance of our approach on a range of numerical examples, including several high-dimensional models and datasets, demonstrating comparable performance to other ParVI algorithms with no need to tune a learning rate.
M3  - Journal article
VL  - 202
SP  - 30850
EP  - 30882
JO  - Proceedings of Machine Learning Research
JF  - Proceedings of Machine Learning Research
SN  - 1938-7228
ER  -
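
For readers who want a concrete sense of the method described in the abstract, the sketch below pairs a standard SVGD drift with a KT-style coin-betting update, so that the size of each particle's move is set by the betting scheme rather than a tuned learning rate. This is a minimal illustrative sketch under stated assumptions (a standard 2-D Gaussian target, an RBF kernel, drift normalisation to keep bets bounded); the function names grad_log_p, svgd_drift, and coin_svgd and all constants are ours, not the authors' reference implementation.

import numpy as np

def grad_log_p(x):
    # Illustrative target: standard 2-D Gaussian, so grad log p(x) = -x.
    return -x

def svgd_drift(X, h=1.0):
    # Standard SVGD direction for each particle:
    #   phi(x_i) = (1/N) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    # with an RBF kernel k of bandwidth h.
    N = X.shape[0]
    diffs = X[:, None, :] - X[None, :, :]                    # x_i - x_j, shape (N, N, d)
    K = np.exp(-np.sum(diffs ** 2, axis=-1) / (2 * h ** 2))  # k(x_i, x_j), shape (N, N)
    attract = K @ grad_log_p(X)                              # sum_j k(x_i, x_j) grad log p(x_j)
    repulse = np.sum(K[:, :, None] * diffs, axis=1) / h ** 2 # sum_j grad_{x_j} k(x_j, x_i)
    return (attract + repulse) / N

def coin_svgd(x0, n_steps=500, init_wealth=1.0, h=1.0):
    # Learning-rate-free particle update via coin betting (KT-style bettor):
    #   x_t = x_0 + (sum_{s<t} c_s / t) * (init_wealth + sum_{s<t} <c_s, x_s - x_0>),
    # where c_s is the (normalised) SVGD drift at step s. No step size appears.
    X0 = x0.copy()
    X = x0.copy()
    drift_sum = np.zeros_like(x0)       # sum of past drifts c_s, per particle
    reward = np.zeros(x0.shape[0])      # accumulated "winnings" <c_s, x_s - x_0>
    for t in range(1, n_steps + 1):
        X = X0 + drift_sum * ((init_wealth + reward) / t)[:, None]
        c = svgd_drift(X, h)
        # The coin-betting analysis assumes bounded outcomes, so cap ||c|| at 1.
        c = c / np.maximum(np.linalg.norm(c, axis=-1, keepdims=True), 1.0)
        reward += np.sum(c * (X - X0), axis=-1)
        drift_sum += c
    return X

# Usage: start 100 particles from a wide initial spread and let them settle.
rng = np.random.default_rng(0)
samples = coin_svgd(rng.normal(size=(100, 2)) * 3.0)
print(samples.mean(axis=0), samples.std(axis=0))  # roughly (0, 0) and (1, 1)

Note that no step size appears anywhere in coin_svgd: the bettor's stake, (init_wealth + reward) / t, plays that role, which is the learning-rate-free property the abstract describes.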