Final published version, 961 KB, PDF document
Licence: CC BY
Research output: Contribution to conference - Without ISBN/ISSN › Conference paper
TY - CONF
T1 - BOSH: Bayesian Optimization by Sampling Hierarchically
T2 - Workshop on Real World Experiment Design and Active Learning at ICML 2020
AU - Moss, Henry B.
AU - Leslie, David S.
AU - Rayson, Paul
PY - 2020/7/18
Y1 - 2020/7/18
N2 - Deployments of Bayesian Optimization (BO) for functions with stochastic evaluations, such as parameter tuning via cross validation and simulation optimization, typically optimize an average of a fixed set of noisy realizations of the objective function. However, disregarding the true objective function in this manner finds a high-precision optimum of the wrong function. To solve this problem, we propose Bayesian Optimization by Sampling Hierarchically (BOSH), a novel BO routine pairing a hierarchical Gaussian process with an information-theoretic framework to generate a growing pool of realizations as the optimization progresses. We demonstrate that BOSH provides more efficient and higher-precision optimization than standard BO across synthetic benchmarks, simulation optimization, reinforcement learning and hyper-parameter tuning tasks.
AB - Deployments of Bayesian Optimization (BO) for functions with stochastic evaluations, such as parameter tuning via cross validation and simulation optimization, typically optimize an average of a fixed set of noisy realizations of the objective function. However, disregarding the true objective function in this manner finds a high-precision optimum of the wrong function. To solve this problem, we propose Bayesian Optimization by Sampling Hierarchically (BOSH), a novel BO routine pairing a hierarchical Gaussian process with an information-theoretic framework to generate a growing pool of realizations as the optimization progresses. We demonstrate that BOSH provides more efficient and higher-precision optimization than standard BO across synthetic benchmarks, simulation optimization, reinforcement learning and hyper-parameter tuning tasks.
KW - cs.LG
KW - stat.ML
M3 - Conference paper
Y2 - 13 July 2020 through 18 July 2020
ER -
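
Below is a minimal, illustrative Python sketch of the loop described in the abstract: a hierarchical Gaussian process (a shared latent objective plus per-seed deviations) fitted to evaluations drawn from a growing pool of realizations. It is not the authors' implementation; here the seed pool grows on a fixed schedule and the next point is chosen by a lower-confidence bound on the latent objective, standing in for the paper's information-theoretic criterion, and all function names, kernels and settings are assumptions.

import numpy as np

rng = np.random.default_rng(0)


def rbf(a, b, ls=0.3, var=1.0):
    # squared-exponential kernel on 1-D inputs
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)


def true_objective(x):
    # the underlying objective we actually want to minimise
    return np.sin(3.0 * x) + 0.2 * x ** 2


def realization(x, seed):
    # one stochastic realization g_s(x): the objective plus a smooth,
    # seed-specific deviation and a little evaluation noise
    srng = np.random.default_rng(seed)
    amp, phase = 0.3 * srng.standard_normal(), 2.0 * np.pi * srng.random()
    return true_objective(x) + amp * np.sin(5.0 * x + phase) + 0.05 * rng.standard_normal()


def latent_posterior(X, S, y, Xstar, noise=1e-4):
    # GP posterior over the latent f under the hierarchical model
    # y = f(x) + delta_seed(x) + eps, with RBF kernels for f and delta
    same_seed = (S[:, None] == S[None, :]).astype(float)
    K = rbf(X, X) + same_seed * rbf(X, X, ls=0.5, var=0.25) + noise * np.eye(len(X))
    Ks = rbf(Xstar, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xstar, Xstar) - Ks @ np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 1e-12)


grid = np.linspace(-2.0, 2.0, 200)
seeds = [0]                       # the growing pool of realizations
X, S, y = [], [], []
for x0 in (-1.0, 1.0):            # two initial evaluations on the first seed
    X.append(x0)
    S.append(seeds[0])
    y.append(realization(x0, seeds[0]))

for it in range(25):
    mean, var = latent_posterior(np.array(X), np.array(S), np.array(y), grid)
    x_next = grid[np.argmin(mean - 2.0 * np.sqrt(var))]   # LCB on the latent f
    if it % 5 == 4:               # grow the pool on a fixed schedule;
        seeds.append(len(seeds))  # BOSH instead decides this information-theoretically
    s_next = int(rng.choice(seeds))
    X.append(x_next)
    S.append(s_next)
    y.append(realization(x_next, s_next))

mean, _ = latent_posterior(np.array(X), np.array(S), np.array(y), grid)
print(f"estimated minimiser of the latent objective: x = {grid[np.argmin(mean)]:.3f}")

The point of the sketch is the contrast drawn in the abstract: rather than averaging a fixed set of noisy realizations, the surrogate separates the shared latent objective from per-seed deviations, so new seeds can be introduced as the optimization progresses and the recommendation is made on the latent objective itself.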