Standard
Scalable Thompson sampling using sparse Gaussian process models. / Vakili, Sattar; Moss, Henry; Artemev, Artem et al.
Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021. ed. / Marc'Aurelio Ranzato; Alina Beygelzimer; Yann Dauphin; Percy S. Liang; Jenn Wortman Vaughan. Vol. 34. 2021. p. 5631-5643 (Advances in Neural Information Processing Systems).
Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review
Harvard
Vakili, S, Moss, H, Artemev, A, Dutordoir, V & Picheny, V 2021, Scalable Thompson sampling using sparse Gaussian process models. in MA Ranzato, A Beygelzimer, Y Dauphin, PS Liang & J Wortman Vaughan (eds), Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021. vol. 34, Advances in Neural Information Processing Systems, pp. 5631-5643. <https://proceedings.neurips.cc/paper_files/paper/2021/hash/2c7f9ccb5a39073e24babc3a4cb45e60-Abstract.html>
APA
Vakili, S., Moss, H., Artemev, A., Dutordoir, V., & Picheny, V. (2021). Scalable Thompson sampling using sparse Gaussian process models. In MA. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, & J. Wortman Vaughan (Eds.), Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021 (Vol. 34, pp. 5631-5643). (Advances in Neural Information Processing Systems). https://proceedings.neurips.cc/paper_files/paper/2021/hash/2c7f9ccb5a39073e24babc3a4cb45e60-Abstract.html
Vancouver
Vakili S, Moss H, Artemev A, Dutordoir V, Picheny V. Scalable Thompson sampling using sparse Gaussian process models. In Ranzato MA, Beygelzimer A, Dauphin Y, Liang PS, Wortman Vaughan J, editors, Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021. Vol. 34. 2021. p. 5631-5643. (Advances in Neural Information Processing Systems).
Author
Vakili, Sattar; Moss, Henry; Artemev, Artem et al. / Scalable Thompson sampling using sparse Gaussian process models. Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021. editor / Marc'Aurelio Ranzato; Alina Beygelzimer; Yann Dauphin; Percy S. Liang; Jenn Wortman Vaughan. Vol. 34. 2021. pp. 5631-5643 (Advances in Neural Information Processing Systems).
Bibtex
@inproceedings{3e4df96c008f499dbf98ed1f28c35495,
title = "Scalable Thompson sampling using sparse Gaussian process models",
abstract = "Thompson Sampling (TS) from Gaussian Process (GP) models is a powerful tool for the optimization of black-box functions. Although TS enjoys strong theoretical guarantees and convincing empirical performance, it incurs a large computational overhead that scales polynomially with the optimization budget. Recently, scalable TS methods based on sparse GP models have been proposed to increase the scope of TS, enabling its application to problems that are sufficiently multi-modal, noisy or combinatorial to require more than a few hundred evaluations to be solved. However, the approximation error introduced by sparse GPs invalidates all existing regret bounds. In this work, we perform a theoretical and empirical analysis of scalable TS. We provide theoretical guarantees and show that the drastic reduction in computational complexity of scalable TS can be enjoyed without loss in the regret performance over the standard TS. These conceptual claims are validated for practical implementations of scalable TS on synthetic benchmarks and as part of a real-world high-throughput molecular design task.",
author = "Sattar Vakili and Henry Moss and Artem Artemev and Vincent Dutordoir and Victor Picheny",
year = "2021",
month = dec,
day = "6",
language = "English",
volume = "34",
series = "Advances in Neural Information Processing Systems",
publisher = "Neural Information Processing Systems Foundation",
pages = "5631--5643",
editor = "Marc'Aurelio Ranzato and Alina Beygelzimer and Yann Dauphin and Liang, {Percy S.} and {Wortman Vaughan}, Jenn",
booktitle = "Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021",
}
RIS
TY - GEN
T1 - Scalable Thompson sampling using sparse Gaussian process models
AU - Vakili, Sattar
AU - Moss, Henry
AU - Artemev, Artem
AU - Dutordoir, Vincent
AU - Picheny, Victor
PY - 2021/12/6
Y1 - 2021/12/6
N2 - Thompson Sampling (TS) from Gaussian Process (GP) models is a powerful tool for the optimization of black-box functions. Although TS enjoys strong theoretical guarantees and convincing empirical performance, it incurs a large computational overhead that scales polynomially with the optimization budget. Recently, scalable TS methods based on sparse GP models have been proposed to increase the scope of TS, enabling its application to problems that are sufficiently multi-modal, noisy or combinatorial to require more than a few hundred evaluations to be solved. However, the approximation error introduced by sparse GPs invalidates all existing regret bounds. In this work, we perform a theoretical and empirical analysis of scalable TS. We provide theoretical guarantees and show that the drastic reduction in computational complexity of scalable TS can be enjoyed without loss in the regret performance over the standard TS. These conceptual claims are validated for practical implementations of scalable TS on synthetic benchmarks and as part of a real-world high-throughput molecular design task.
AB - Thompson Sampling (TS) from Gaussian Process (GP) models is a powerful tool for the optimization of black-box functions. Although TS enjoys strong theoretical guarantees and convincing empirical performance, it incurs a large computational overhead that scales polynomially with the optimization budget. Recently, scalable TS methods based on sparse GP models have been proposed to increase the scope of TS, enabling its application to problems that are sufficiently multi-modal, noisy or combinatorial to require more than a few hundred evaluations to be solved. However, the approximation error introduced by sparse GPs invalidates all existing regret bounds. In this work, we perform a theoretical and empirical analysis of scalable TS. We provide theoretical guarantees and show that the drastic reduction in computational complexity of scalable TS can be enjoyed without loss in the regret performance over the standard TS. These conceptual claims are validated for practical implementations of scalable TS on synthetic benchmarks and as part of a real-world high-throughput molecular design task.
M3 - Conference contribution/Paper
VL - 34
T3 - Advances in Neural Information Processing Systems
SP - 5631
EP - 5643
BT - Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
A2 - Ranzato, Marc'Aurelio
A2 - Beygelzimer, Alina
A2 - Dauphin, Yann
A2 - Liang, Percy S.
A2 - Wortman Vaughan, Jenn
ER -
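
As a rough illustration of the method the abstract describes, below is a minimal numpy sketch of vanilla Thompson sampling from an exact GP posterior on a 1-D grid. The kernel, hyperparameters, noise level, and toy objective are assumptions made purely for illustration; the exact posterior costs O(n^3) per step in the number of observations n, which is the polynomial overhead the paper targets, and this sketch does not implement the paper's sparse-GP variant.

# Minimal sketch of Thompson sampling with an exact GP posterior on a 1-D grid.
# The kernel, hyperparameters, and toy objective are illustrative assumptions;
# the paper's scalable variant replaces this exact O(n^3) posterior with a
# sparse (inducing-point) approximation, which is not implemented here.
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel k(a, b) for 1-D input arrays."""
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

def gp_posterior(x_train, y_train, x_grid, noise=1e-2):
    """Exact GP posterior mean and covariance on a candidate grid."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_grid)
    K_ss = rbf_kernel(x_grid, x_grid)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    v = np.linalg.solve(L, K_s)
    mean = K_s.T @ alpha
    cov = K_ss - v.T @ v
    return mean, cov

def objective(x):
    """Toy black-box objective (assumed for illustration only)."""
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
x_grid = np.linspace(-1.0, 2.0, 200)
x_train = rng.uniform(-1.0, 2.0, size=3)
y_train = objective(x_train) + 0.1 * rng.standard_normal(3)

for step in range(20):
    mean, cov = gp_posterior(x_train, y_train, x_grid)
    # Thompson sampling: draw one function from the posterior, maximize it.
    f_sample = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(len(x_grid)))
    x_next = x_grid[np.argmax(f_sample)]
    y_next = objective(x_next) + 0.1 * rng.standard_normal()
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, y_next)

print("best observed:", x_train[np.argmax(y_train)], y_train.max())

Each iteration draws one joint posterior sample over the whole grid and evaluates the objective at that sample's argmax. The paper's contribution is a regret analysis showing that this behaviour is preserved, at drastically lower cost, when the exact posterior above is replaced by a sparse approximation.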