Rights statement: https://www.cambridge.org/core/journals/probability-in-the-engineering-and-informational-sciences The final, definitive version of this article has been published in the journal Probability in the Engineering and Informational Sciences, 31(2), pp. 239-263, 2017, © 2016 Cambridge University Press.
Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License
Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - On the identification and mitigation of weaknesses in the Knowledge Gradient policy for multi-armed bandits
AU - Edwards, James
AU - Fearnhead, Paul
AU - Glazebrook, Kevin David
N1 - https://www.cambridge.org/core/journals/probability-in-the-engineering-and-informational-sciences The final, definitive version of this article has been published in the journal Probability in the Engineering and Informational Sciences, 31(2), pp. 239-263, 2017, © 2016 Cambridge University Press.
PY - 2017/4
Y1 - 2017/4
N2 - The Knowledge Gradient (KG) policy was originally proposed for online ranking and selection problems but has recently been adapted for use in online decision making in general and multi-armed bandit problems (MABs) in particular. We study its use in a class of exponential family MABs and identify weaknesses, including a propensity to take actions which are dominated with respect to both exploitation and exploration. We propose variants of KG which avoid such errors. These new policies include an index heuristic which deploys a KG approach to develop an approximation to the Gittins index. A numerical study shows this policy to perform well over a range of MABs, including those for which index policies are not optimal. While KG does not take dominated actions when bandits are Gaussian, it fails to be index consistent and, when arms are correlated, appears not to enjoy a performance advantage over competitor policies sufficient to compensate for its greater computational demands.
AB - The Knowledge Gradient (KG) policy was originally proposed for online ranking and selection problems but has recently been adapted for use in online decision making in general and multi-armed bandit problems (MABs) in particular. We study its use in a class of exponential family MABs and identify weaknesses, including a propensity to take actions which are dominated with respect to both exploitation and exploration. We propose variants of KG which avoid such errors. These new policies include an index heuristic which deploys a KG approach to develop an approximation to the Gittins index. A numerical study shows this policy to perform well over a range of MABs, including those for which index policies are not optimal. While KG does not take dominated actions when bandits are Gaussian, it fails to be index consistent and, when arms are correlated, appears not to enjoy a performance advantage over competitor policies sufficient to compensate for its greater computational demands.
KW - Multi-Armed Bandit problem
KW - stochastic dynamic programming
U2 - 10.1017/S0269964816000279
DO - 10.1017/S0269964816000279
M3 - Journal article
VL - 31
SP - 239
EP - 263
JO - Probability in the Engineering and Informational Sciences
JF - Probability in the Engineering and Informational Sciences
SN - 0269-9648
IS - 2
ER -