
Electronic data

  • 1709.06853: Accepted author manuscript, 4.11 MB, PDF document. Available under license: CC BY-NC (Creative Commons Attribution-NonCommercial 4.0 International License).


Bandits with Delayed, Aggregated Anonymous Feedback

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN. Conference contribution/paper, peer-reviewed.

Publication date: 15/07/2018
Host publication: Proceedings of the International Conference on Machine Learning, 10-15 July 2018, Stockholmsmässan, Stockholm, Sweden
Editors: Jennifer Dy
Number of pages: 19
Original language: English

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Print): 1938-7228


We study a variant of the stochastic K-armed bandit problem, which we call “bandits with delayed, aggregated anonymous feedback”. In this problem, when the player pulls an arm, a reward is generated; however, it is not immediately observed. Instead, at the end of each round the player observes only the sum of the previously generated rewards that happen to arrive in that round. The rewards are stochastically delayed, and due to the aggregated nature of the observations, the information of which arm led to a particular reward is lost. The question is: what is the cost of the information loss due to this delayed, aggregated anonymous feedback? Previous works have studied bandits with stochastic, non-anonymous delays and found that the regret increases only by an additive term related to the expected delay. In this paper, we show that this additive regret increase can be maintained in the harder delayed, aggregated anonymous feedback setting when the expected delay (or a bound on it) is known. We provide an algorithm that matches the worst-case regret of the non-anonymous problem exactly when the delays are bounded, and up to logarithmic factors or an additive variance term when they are unbounded.
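The feedback model described in the abstract can be sketched as a short simulation. This is a minimal illustration, not the paper's algorithm: the arm means, the uniformly random policy, and the uniform delay distribution are all hypothetical choices made here for concreteness. It shows how pulling an arm generates a reward whose observation is deferred, and how each round's observation is only an anonymous sum of whichever rewards arrive in that round.

```python
import random

def simulate(K=3, horizon=20, seed=0):
    """Simulate delayed, aggregated anonymous feedback: a pull generates
    a reward, but the player observes, at the end of each round, only the
    SUM of rewards whose stochastic delays expire in that round; the arm
    identities behind that sum are lost."""
    rng = random.Random(seed)
    means = [0.2, 0.5, 0.8][:K]   # hypothetical Bernoulli arm means
    pending = {}                  # arrival round -> accumulated reward arriving then
    observations = []
    for t in range(horizon):
        arm = rng.randrange(K)                             # e.g. a uniformly random policy
        reward = 1.0 if rng.random() < means[arm] else 0.0  # reward generated now...
        delay = rng.randrange(0, 5)                        # ...but delayed (uniform on {0,...,4})
        arrival = t + delay
        pending[arrival] = pending.get(arrival, 0.0) + reward
        # End-of-round observation: the aggregated, anonymous sum arriving at round t.
        observations.append(pending.pop(t, 0.0))
    return observations

obs = simulate()
print(len(obs))  # one aggregated observation per round
```

Note that the learner never sees `(arm, reward)` pairs, only the `observations` list; this is exactly the information loss the abstract asks about, and why matching the regret of the non-anonymous delayed setting is nontrivial.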