
Electronic data

  • 2006.12093v1

    Accepted author manuscript, 1.01 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


MUMBO: MUlti-task Max-value Bayesian Optimization

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 14/09/2020
Host publication: Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2020, Proceedings
Publisher: Springer
Pages: 447-462
Number of pages: 16
ISBN (print): 9783030676636
Original language: English

Abstract

We propose MUMBO, the first high-performing yet computationally efficient acquisition function for multi-task Bayesian optimization. Here, the challenge is to perform efficient optimization by evaluating low-cost functions somehow related to our true target function. This is a broad class of problems including the popular task of multi-fidelity optimization. However, while information-theoretic acquisition functions are known to provide state-of-the-art Bayesian optimization, existing implementations for multi-task scenarios have prohibitive computational requirements. Previous acquisition functions have therefore been suitable only for problems with both low-dimensional parameter spaces and function query costs sufficiently large to overshadow very significant optimization overheads. In this work, we derive a novel multi-task version of entropy search, delivering robust performance with low computational overheads across classic optimization challenges and multi-task hyper-parameter tuning. MUMBO is scalable and efficient, allowing multi-task Bayesian optimization to be deployed in problems with rich parameter and fidelity spaces.
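To make the multi-task setting concrete, the following is a minimal toy sketch of the kind of query loop the abstract describes: a cheap, biased low-fidelity task is used to explore, and the expensive target is queried only to confirm promising regions. All function names, costs, and the fidelity-selection rule here are illustrative assumptions; in particular, the acquisition is a simple upper-confidence-bound with a heuristic fidelity rule, standing in for MUMBO's information-theoretic (max-value entropy search) criterion, and the low-fidelity bias is folded into the observation noise of a single GP rather than modelled with a proper multi-task kernel.

```python
import numpy as np

def f_hi(x):
    """Hypothetical expensive target task (cost 10 per query)."""
    return np.sin(3.0 * x) + x

def f_lo(x):
    """Hypothetical cheap related task (cost 1) with a systematic bias."""
    return np.sin(3.0 * x) + x + 0.3 * np.cos(5.0 * x)

def gp_posterior(X, y, Xs, noise_var, ls=0.3):
    """Zero-mean GP regression with an RBF kernel and per-point noise."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(X, X) + np.diag(noise_var)
    Ks = k(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(1.0 - np.einsum("ij,ij->j", Ks, sol), 1e-12, None)
    return mu, np.sqrt(var)

grid = np.linspace(0.0, 1.2, 121)
X = np.array([0.1, 0.6, 1.1])            # initial design: cheap task only
y = f_lo(X)
noise = np.full(3, 0.3 ** 2)             # treat low-fidelity bias as noise
cost = {"lo": 1.0, "hi": 10.0}
spent, budget = 0.0, 30.0

while spent < budget:
    mu, sd = gp_posterior(X, y, grid, noise)
    i = int(np.argmax(mu + 2.0 * sd))    # UCB candidate point
    x_next = grid[i]
    # Heuristic fidelity choice: explore with the cheap task while
    # uncertain, confirm well-located candidates on the expensive target.
    if sd[i] > 0.25:
        y_next, nv, task = f_lo(x_next), 0.3 ** 2, "lo"
    else:
        y_next, nv, task = f_hi(x_next), 1e-4, "hi"
    X = np.append(X, x_next)
    y = np.append(y, y_next)
    noise = np.append(noise, nv)
    spent += cost[task]

mu, _ = gp_posterior(X, y, grid, noise)
x_best = grid[int(np.argmax(mu))]
print(f"estimated maximizer: x = {x_best:.2f}, f_hi = {f_hi(x_best):.2f}")
```

Under a total budget of 30 cost units, the loop can afford many low-fidelity queries but only a couple of target evaluations; the point of MUMBO's cheaper information-theoretic acquisition is to make this trade-off in a principled way, and in much richer fidelity spaces than this two-task sketch.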