Coin Sampling: Gradient-Based Bayesian Inference without Learning Rates

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Journal publication date: 23/07/2023
Journal: Proceedings of Machine Learning Research
Volume: 202
Number of pages: 33
Pages (from-to): 30850-30882
Publication status: Published
Original language: English

Abstract

In recent years, particle-based variational inference (ParVI) methods such as Stein variational gradient descent (SVGD) have grown in popularity as scalable methods for Bayesian inference. Unfortunately, the properties of such methods invariably depend on hyperparameters such as the learning rate, which must be carefully tuned by the practitioner in order to ensure convergence to the target measure at a suitable rate. In this paper, we introduce a suite of new particle-based methods for scalable Bayesian inference based on coin betting, which are entirely learning-rate free. We illustrate the performance of our approach on a range of numerical examples, including several high-dimensional models and datasets, demonstrating comparable performance to other ParVI algorithms with no need to tune a learning rate.
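To make the idea concrete, the sketch below couples a standard SVGD update direction with a KT/COCOB-style coin-betting rule so that no learning rate appears anywhere in the particle update. This is only a rough illustration under stated assumptions: the function names (svgd_direction, coin_svgd), the fixed RBF kernel bandwidth, and the exact betting scheme are illustrative choices, not the authors' precise Coin SVGD algorithm from the paper.

```python
import numpy as np

def svgd_direction(x, grad_log_p, h=1.0):
    # Standard SVGD update direction with a fixed-bandwidth RBF kernel.
    # x: (n, d) array of particles; grad_log_p: score function of the target.
    diff = x[:, None, :] - x[None, :, :]                    # (n, n, d), x_i - x_j
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h ** 2))  # kernel matrix (n, n)
    attract = K @ grad_log_p(x)                              # pulls particles toward high density
    repulse = (diff * K[:, :, None]).sum(axis=1) / h ** 2    # keeps particles spread apart
    return (attract + repulse) / x.shape[0]

def coin_svgd(x0, grad_log_p, n_steps=500):
    # Learning-rate-free particle updates via a KT/COCOB-style coin-betting rule:
    # the SVGD direction plays the role of the "coin outcome" c_t, and the
    # bettor's accumulated wealth replaces the hand-tuned step size.
    n, d = x0.shape
    x = x0.copy()
    c_sum = np.zeros((n, d))            # running sum of outcomes
    reward = np.zeros((n, d))           # accumulated betting reward
    L = np.full((n, d), 1e-8)           # running bound on |c_t| (avoids division by zero)
    for t in range(1, n_steps + 1):
        c = svgd_direction(x, grad_log_p)
        L = np.maximum(L, np.abs(c))
        reward = np.maximum(reward + c * (x - x0), 0.0)
        c_sum += c
        x = x0 + c_sum / (L * t) * (L + reward)
    return x

# Example: evolve 100 particles toward a 2-D standard Gaussian (score: grad log p(x) = -x).
rng = np.random.default_rng(0)
samples = coin_svgd(rng.normal(size=(100, 2)), lambda x: -x)
```

Note that the only free choices above are the kernel and the number of iterations; the step size is determined implicitly by the betting wealth, which is the point of the learning-rate-free construction.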

Bibliographic note

In: Proceedings of the 40th International Conference on Machine Learning (ICML), Hawaii, USA.