
On-line adaptation of exploration in the one-armed bandit with covariates problem

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 2010
Host publication: Proceedings - 9th International Conference on Machine Learning and Applications, ICMLA 2010
Publisher: IEEE
Pages: 459-464
Number of pages: 6
ISBN (print): 9780769543000
Original language: English

Abstract

Many sequential decision making problems require an agent to balance exploration and exploitation to maximise long-term reward. Existing policies that address this tradeoff typically have parameters that are set a priori to control the amount of exploration. In finite-time problems, the optimal values of these parameters are highly dependent on the problem faced. In this paper, we propose adapting the amount of exploration performed on-line, as information is gathered by the agent. To this end we introduce a novel algorithm, e-ADAPT, which has no free parameters. The algorithm adapts as it plays and sequentially chooses whether to explore or exploit, driven by the amount of uncertainty in the system. We provide simulation results for the one-armed bandit with covariates problem, which demonstrate that e-ADAPT correctly controls the amount of exploration in finite-time problems and yields rewards close to those of optimally tuned off-line policies. Furthermore, we show that e-ADAPT is robust to a high-dimensional covariate, as well as to misspecified models. Finally, we describe how our methods could be extended to other sequential decision making problems, such as dynamic bandit problems with changing reward structures.
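For readers unfamiliar with the setting, the sketch below illustrates the one-armed bandit with covariates problem and an uncertainty-driven explore/exploit rule in the spirit of the abstract. It is not the published e-ADAPT algorithm: the linear reward model, the known safe-arm payoff, and the one-standard-deviation exploration margin are illustrative assumptions.

```python
# Minimal sketch: one-armed bandit with covariates and an uncertainty-driven
# exploration rule. NOT the authors' e-ADAPT; model and thresholds are assumed.
import numpy as np

rng = np.random.default_rng(0)

T, d = 2000, 3                  # horizon and covariate dimension (assumed)
beta_true = rng.normal(size=d)  # unknown parameter of the arm's reward (assumed linear model)
safe_reward = 0.0               # known payoff of the "do nothing" arm (assumed)
noise_sd = 1.0

# Bayesian linear regression state for the unknown arm (ridge-style prior).
A = np.eye(d)                   # precision matrix
b = np.zeros(d)

total_reward = 0.0
for t in range(T):
    x = rng.normal(size=d)                 # covariate revealed at time t
    A_inv = np.linalg.inv(A)
    beta_hat = A_inv @ b                   # posterior mean of the parameter
    mean_est = x @ beta_hat                # estimated arm reward at x
    std_est = np.sqrt(x @ A_inv @ x)       # predictive uncertainty at x

    # Uncertainty-driven choice: pull the unknown arm unless its estimated
    # reward falls more than one standard deviation below the safe arm.
    # The one-sigma margin is an assumption made for illustration only.
    pull_arm = (mean_est - safe_reward) > -std_est

    if pull_arm:
        reward = x @ beta_true + noise_sd * rng.normal()
        A += np.outer(x, x)                # update the posterior with the
        b += reward * x                    # observed (covariate, reward) pair
    else:
        reward = safe_reward               # safe arm yields no new information
    total_reward += reward

print(f"average reward over {T} rounds: {total_reward / T:.3f}")
```

The point of the sketch is the decision rule: exploration happens automatically when the predictive uncertainty at the current covariate is large, and dies away as data accumulate, without a tuning parameter fixed in advance.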