

Prospects for bandit solutions in sensor management

Research output: Contribution to journal › Journal article › peer-review

Journal publication date: 2010
Journal: The Computer Journal
Issue number: 9
Volume: 53
Number of pages: 14
Pages (from-to): 1370-1383
Publication status: Published
Original language: English

Abstract

Sensor management in information-rich and dynamic environments can be posed as a sequential action selection problem with side information. To study such problems we employ the dynamic multi-armed bandit with covariates framework. In this generalization of the multi-armed bandit, the expected rewards are time-varying linear functions of the covariate vector. The learning goal is to associate the covariate with the optimal action at each instant, in effect learning to partition the covariate space adaptively. Sensor management applications frequently arise in environments whose precise dynamics are unknown. In such settings, the sensor manager tracks the evolving environment by observing only the covariates and the consequences of the selected actions. This creates difficulties not encountered in static problems and changes the exploitation–exploration dilemma. We study how the different factors of the problem interact; in particular, the impact of the environment dynamics on the action selection problem is influenced by the covariate dimensionality. We present the surprising result that strategies that perform very little or no exploration nonetheless perform well in dynamic environments.
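The framework described above can be sketched in a few lines. The following is a minimal illustration, not the paper's method: each arm's expected reward is a linear function of the covariate whose weights drift over time, and a purely greedy manager tracks them with discounted (forgetting-factor) least squares. All names, parameter values, and the forgetting-factor estimator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_arms, dim, horizon = 3, 2, 500
drift = 0.01   # per-step random walk on the true weights (the "dynamic" part)
lam = 0.98     # forgetting factor, so stale observations fade as the world drifts

# True time-varying linear reward model: reward of arm a at time t is w_a(t) . x_t + noise
true_w = rng.normal(size=(n_arms, dim))

# Per-arm discounted least-squares state (regularized Gram matrix and moment vector)
A = np.stack([np.eye(dim) for _ in range(n_arms)])
b = np.zeros((n_arms, dim))

rewards = []
for t in range(horizon):
    x = rng.normal(size=dim)  # covariate (side information) observed before acting
    est_w = np.array([np.linalg.solve(A[a], b[a]) for a in range(n_arms)])
    arm = int(np.argmax(est_w @ x))           # greedy: no explicit exploration
    r = float(true_w[arm] @ x) + 0.1 * rng.normal()  # only the chosen arm's reward is seen
    # Discounted least-squares update for the selected arm only
    A[arm] = lam * A[arm] + np.outer(x, x)
    b[arm] = lam * b[arm] + r * x
    rewards.append(r)
    true_w += drift * rng.normal(size=true_w.shape)  # environment evolves

print(round(float(np.mean(rewards)), 3))
```

The randomness of the covariates themselves supplies incidental exploration here, which is one intuition for why near-greedy strategies can hold up in such dynamic settings.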