
Regret bounds for Gaussian process bandit problems

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 2010
Host publication: Artificial Intelligence and Statistics (AISTATS)
Pages: 273-280
Number of pages: 8
Original language: English

Abstract

Bandit algorithms are concerned with trading off exploration against exploitation when a number of options are available but we can only learn their quality by experimenting with them. We consider the scenario in which the reward distribution for the arms is modelled by a Gaussian process and there is no noise in the observed reward. Our main result bounds the regret experienced by algorithms relative to the a posteriori optimal strategy of playing the best arm throughout, under benign assumptions about the covariance function defining the Gaussian process. We further complement these upper bounds with corresponding lower bounds for particular covariance functions, demonstrating that in general there is at most a logarithmic looseness in our upper bounds.
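To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of a noiseless Gaussian-process bandit over a finite arm set, where a reward function is drawn from a GP prior, arms are chosen by an upper-confidence rule on the GP posterior, and regret is measured against always playing the best arm. The kernel, lengthscale, exploration weight, and horizon are illustrative assumptions.

```python
import numpy as np

def se_kernel(x, y, lengthscale=0.2):
    """Squared-exponential covariance between two 1-D input arrays."""
    d2 = (x[:, None] - y[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
arms = np.linspace(0.0, 1.0, 50)            # finite set of arms on [0, 1]
K = se_kernel(arms, arms)

# Draw one reward function from the GP prior; observations are noise-free.
f = rng.multivariate_normal(np.zeros(len(arms)), K + 1e-10 * np.eye(len(arms)))
best = f.max()

played, rewards, regret = [], [], 0.0
for t in range(25):
    if not played:
        mu, var = np.zeros(len(arms)), np.diag(K).copy()
    else:
        idx = np.array(played)
        K_obs = K[np.ix_(idx, idx)] + 1e-10 * np.eye(len(idx))  # jitter for stability
        K_cross = K[:, idx]
        mu = K_cross @ np.linalg.solve(K_obs, np.array(rewards))
        sol = np.linalg.solve(K_obs, K_cross.T)
        var = np.diag(K) - np.einsum('ij,ji->i', K_cross, sol)
    beta = 2.0                               # illustrative exploration weight
    a = int(np.argmax(mu + beta * np.sqrt(np.maximum(var, 0.0))))
    played.append(a)
    rewards.append(f[a])                     # noiseless reward observation
    regret += best - f[a]

print(f"cumulative regret after {len(played)} plays: {regret:.3f}")
```

Because observations are noiseless, each play pins the posterior exactly at the chosen arm, so the upper-confidence rule quickly concentrates on near-optimal arms under a smooth covariance function; the paper's regret bounds quantify how fast this happens for benign kernels.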