Learning in unknown reward games: application to sensor networks

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Journal publication date: 1/06/2014
Journal: The Computer Journal
Issue number: 6
Volume: 57
Number of pages: 18
Pages (from-to): 875-892
Publication status: Published
Original language: English

Abstract

This paper demonstrates a decentralized method for optimization using game-theoretic multi-agent techniques, applied to a sensor network management problem. Our first major contribution is to show how the marginal contribution utility design can be used to construct an unknown-reward potential game formulation of the problem. This formulation exploits the sparse structure of sensor network problems, and allows us to bound the price of anarchy of the Nash equilibria of the induced game. Furthermore, since the game is a potential game, solutions can be found using multi-agent learning techniques. The techniques we derive use Q-learning to estimate an agent's rewards, while an action adaptation process responds to the behaviour of the agent's opponents. However, there are many different algorithmic configurations that could be used to solve these games. Thus, our second major contribution is an extensive evaluation of several action adaptation processes. Specifically, we compare six algorithms across a variety of parameter settings to ascertain the quality of the solutions they produce, their speed of convergence and their robustness to pre-specified parameter choices. Our results show that all six perform similarly across a wide range of parameters. Performance degrades significantly, however, when the learning policy's sampling probabilities decay to zero too quickly for rewards to be estimated accurately.
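
The learning scheme the abstract describes, Q-learning to estimate unknown rewards combined with an action adaptation process, can be illustrated with a minimal sketch. The Python example below is an assumption-laden illustration, not the paper's algorithm: the global utility, agent counts and parameter values are all hypothetical stand-ins. It pairs marginal-contribution utilities with stateless Q-learning and an epsilon-greedy adaptation rule whose exploration decays slowly, echoing the abstract's warning about sampling probabilities that vanish too quickly for accurate reward estimation.

```python
# Minimal sketch (not the paper's exact algorithm): agents with unknown
# rewards estimate per-action payoffs via Q-learning and adapt actions
# with slowly decaying epsilon-greedy exploration.
import random

N_AGENTS = 4      # hypothetical: number of sensors
N_ACTIONS = 3     # hypothetical: sensing modes per sensor
ALPHA = 0.1       # Q-learning step size
ROUNDS = 5000

def global_utility(joint_action):
    # Toy stand-in for sensor-network coverage: the system is rewarded
    # for the diversity of sensing modes in use.
    return len(set(joint_action))

def marginal_contribution(i, joint_action):
    # Marginal-contribution utility: the change in global utility when
    # agent i's action is removed. This utility design induces a
    # potential game whose potential is the global utility.
    without_i = joint_action[:i] + joint_action[i + 1:]
    return global_utility(joint_action) - global_utility(without_i)

# Stateless Q-values: one reward estimate per agent per action.
Q = [[0.0] * N_ACTIONS for _ in range(N_AGENTS)]

for t in range(1, ROUNDS + 1):
    # Exploration decays, but slowly enough that every action keeps
    # being sampled and reward estimates stay accurate.
    eps = t ** -0.5
    joint = []
    for i in range(N_AGENTS):
        if random.random() < eps:
            joint.append(random.randrange(N_ACTIONS))            # explore
        else:
            joint.append(max(range(N_ACTIONS), key=lambda a: Q[i][a]))
    for i in range(N_AGENTS):
        reward = marginal_contribution(i, joint)  # observed, not known a priori
        Q[i][joint[i]] += ALPHA * (reward - Q[i][joint[i]])

print([max(range(N_ACTIONS), key=lambda a: q[a]) for q in Q])
```

Under this toy utility, joint actions that cover all three modes are Nash equilibria of the induced game, and the slow epsilon decay lets the Q-estimates track the true marginal contributions while play settles on one of them.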