Electronic data

  • TSSmootherThanLipschitz

    Accepted author manuscript, 615 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

  • 2001.02323v1

    Final published version, 712 KB, PDF document

On Thompson Sampling for Smoother-than-Lipschitz Bandits

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

On Thompson Sampling for Smoother-than-Lipschitz Bandits. / Grant, James A.; Leslie, David S.

23rd International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, 2020. p. 2612-2622 (Proceedings of Machine Learning Research; Vol. 108).
Harvard

Grant, JA & Leslie, DS 2020, On Thompson Sampling for Smoother-than-Lipschitz Bandits. in 23rd International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 108, Proceedings of Machine Learning Research, pp. 2612-2622. <https://arxiv.org/abs/2001.02323>

APA

Grant, J. A., & Leslie, D. S. (2020). On Thompson Sampling for Smoother-than-Lipschitz Bandits. In 23rd International Conference on Artificial Intelligence and Statistics (pp. 2612-2622). (Proceedings of Machine Learning Research; Vol. 108). Proceedings of Machine Learning Research. https://arxiv.org/abs/2001.02323

Vancouver

Grant JA, Leslie DS. On Thompson Sampling for Smoother-than-Lipschitz Bandits. In 23rd International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research. 2020. p. 2612-2622. (Proceedings of Machine Learning Research).

Author

Grant, James A. ; Leslie, David S. / On Thompson Sampling for Smoother-than-Lipschitz Bandits. 23rd International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, 2020. pp. 2612-2622 (Proceedings of Machine Learning Research).

Bibtex

@inproceedings{9c45737e08434c8e95c1f5fb7f1683c0,
title = "On Thompson Sampling for Smoother-than-Lipschitz Bandits",
abstract = "Thompson Sampling is a well-established approach to bandit and reinforcement learning problems. However, its use in continuum-armed bandit problems has received relatively little attention. We provide the first bounds on the regret of Thompson Sampling for continuum-armed bandits under weak conditions on the function class containing the true function and sub-exponential observation noise. Our bounds are realised by analysis of the eluder dimension, a recently proposed measure of the complexity of a function class, which has been demonstrated to be useful in bounding the Bayesian regret of Thompson Sampling for simpler bandit problems under sub-Gaussian observation noise. We derive a new bound on the eluder dimension for classes of functions with Lipschitz derivatives, and generalise previous analyses in multiple regards.",
keywords = "cs.LG, stat.ML",
author = "Grant, {James A.} and Leslie, {David S.}",
year = "2020",
month = aug,
day = "26",
language = "English",
series = "Proceedings of Machine Learning Research",
publisher = "Proceedings of Machine Learning Research",
pages = "2612--2622",
booktitle = "23rd International Conference on Artificial Intelligence and Statistics",

}

RIS

TY - GEN

T1 - On Thompson Sampling for Smoother-than-Lipschitz Bandits

AU - Grant, James A.

AU - Leslie, David S.

PY - 2020/8/26

Y1 - 2020/8/26

N2 - Thompson Sampling is a well-established approach to bandit and reinforcement learning problems. However, its use in continuum-armed bandit problems has received relatively little attention. We provide the first bounds on the regret of Thompson Sampling for continuum-armed bandits under weak conditions on the function class containing the true function and sub-exponential observation noise. Our bounds are realised by analysis of the eluder dimension, a recently proposed measure of the complexity of a function class, which has been demonstrated to be useful in bounding the Bayesian regret of Thompson Sampling for simpler bandit problems under sub-Gaussian observation noise. We derive a new bound on the eluder dimension for classes of functions with Lipschitz derivatives, and generalise previous analyses in multiple regards.

AB - Thompson Sampling is a well-established approach to bandit and reinforcement learning problems. However, its use in continuum-armed bandit problems has received relatively little attention. We provide the first bounds on the regret of Thompson Sampling for continuum-armed bandits under weak conditions on the function class containing the true function and sub-exponential observation noise. Our bounds are realised by analysis of the eluder dimension, a recently proposed measure of the complexity of a function class, which has been demonstrated to be useful in bounding the Bayesian regret of Thompson Sampling for simpler bandit problems under sub-Gaussian observation noise. We derive a new bound on the eluder dimension for classes of functions with Lipschitz derivatives, and generalise previous analyses in multiple regards.

KW - cs.LG

KW - stat.ML

M3 - Conference contribution/Paper

T3 - Proceedings of Machine Learning Research

SP - 2612

EP - 2622

BT - 23rd International Conference on Artificial Intelligence and Statistics

PB - Proceedings of Machine Learning Research

ER -
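
The abstract describes regret bounds for Thompson Sampling on continuum-armed bandits. For readers unfamiliar with the algorithm itself, the following is a minimal sketch of classical Beta-Bernoulli Thompson Sampling on a finite-armed bandit; it is illustrative background only, and does not reproduce the paper's continuum-armed setting, noise model, or eluder-dimension analysis (the function name and parameters are our own, not from the paper).

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Beta-Bernoulli Thompson Sampling on a finite-armed Bernoulli bandit.

    Each arm's mean reward has a Beta(alpha, beta) posterior. Every round
    we sample one mean estimate per arm from its posterior, play the arm
    with the highest sample, and update that arm's posterior.
    """
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k  # posterior successes + 1 (uniform Beta(1, 1) prior)
    beta = [1] * k   # posterior failures + 1
    total_reward = 0
    for _ in range(horizon):
        # Posterior sampling: one draw per arm, then act greedily on the draws.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        total_reward += reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
    return total_reward, alpha, beta
```

Randomising through the posterior is what balances exploration and exploitation: arms with uncertain posteriors occasionally produce high samples and get tried, while arms with confidently low posteriors are played less and less often.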