
Electronic data

  • SemelhagoNelsonSongWaechterIJoCAAM

    Accepted author manuscript, 924 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1287/ijoc.2020.0971


Rapid Discrete Optimization via Simulation with Gaussian Markov Random Fields

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Rapid Discrete Optimization via Simulation with Gaussian Markov Random Fields. / Semelhago, Mark; Nelson, Barry; Song, Eunhye et al.
In: INFORMS Journal on Computing, Vol. 33, No. 3, 30.06.2021, p. 837-1257.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Semelhago, M, Nelson, B, Song, E & Wächter, A 2021, 'Rapid Discrete Optimization via Simulation with Gaussian Markov Random Fields', INFORMS Journal on Computing, vol. 33, no. 3, pp. 837-1257. https://doi.org/10.1287/ijoc.2020.0971

APA

Semelhago, M., Nelson, B., Song, E., & Wächter, A. (2021). Rapid Discrete Optimization via Simulation with Gaussian Markov Random Fields. INFORMS Journal on Computing, 33(3), 837-1257. https://doi.org/10.1287/ijoc.2020.0971

Vancouver

Semelhago M, Nelson B, Song E, Wächter A. Rapid Discrete Optimization via Simulation with Gaussian Markov Random Fields. INFORMS Journal on Computing. 2021 Jun 30;33(3):837-1257. Epub 2020 Oct 14. doi: 10.1287/ijoc.2020.0971

Author

Semelhago, Mark ; Nelson, Barry ; Song, Eunhye et al. / Rapid Discrete Optimization via Simulation with Gaussian Markov Random Fields. In: INFORMS Journal on Computing. 2021 ; Vol. 33, No. 3. pp. 837-1257.

Bibtex

@article{74e089b8d297424593124fca9a20402a,
title = "Rapid Discrete Optimization via Simulation with Gaussian Markov Random Fields",
abstract = "Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been shown to be remarkably effective in problems of modest feasible-region size and decision-variable dimension. The limitation to “modest” problems is a result of the computational overhead and numerical challenges encountered in computing the GP conditional (posterior) distribution on each iteration. In this paper, we substantially expand the size of discrete-decision-variable optimization-via-simulation problems that can be attacked in this way by exploiting a particular GP—discrete Gaussian Markov random fields—and carefully tailored computational methods. The result is the rapid Gaussian Markov Improvement Algorithm (rGMIA), an algorithm that delivers both a global convergence guarantee and finite-sample optimality-gap inference for significantly larger problems. Between infrequent evaluations of the global conditional distribution, rGMIA applies the full power of GP learning to rapidly search smaller sets of promising feasible solutions that need not be spatially close. We carefully document the computational savings via complexity analysis and an extensive empirical study. Summary of Contribution: The broad topic of the paper is optimization via simulation, which means optimizing some performance measure of a system that may only be estimated by executing a stochastic, discrete-event simulation. Stochastic simulation is a core topic and method of operations research. The focus of this paper is on significantly speeding up the computations underlying an existing method that is based on Gaussian process learning, where the underlying Gaussian process is a discrete Gaussian Markov Random Field. This speed-up is accomplished by employing smart computational linear algebra, state-of-the-art algorithms, and a careful divide-and-conquer evaluation strategy. Problems of significantly greater size than any other existing algorithm with similar guarantees can solve are solved as illustrations.",
keywords = "design of experiments, efficiency, statistical analysis",
author = "Mark Semelhago and Barry Nelson and Eunhye Song and Andreas W{\"a}chter",
year = "2021",
month = jun,
day = "30",
doi = "10.1287/ijoc.2020.0971",
language = "English",
volume = "33",
pages = "837--1257",
journal = "INFORMS Journal on Computing",
issn = "1091-9856",
publisher = "INFORMS Inst. for Operations Res. and the Management Sciences",
number = "3",

}

RIS

TY - JOUR

T1 - Rapid Discrete Optimization via Simulation with Gaussian Markov Random Fields

AU - Semelhago, Mark

AU - Nelson, Barry

AU - Song, Eunhye

AU - Wächter, Andreas

PY - 2021/6/30

Y1 - 2021/6/30

N2 - Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been shown to be remarkably effective in problems of modest feasible-region size and decision-variable dimension. The limitation to “modest” problems is a result of the computational overhead and numerical challenges encountered in computing the GP conditional (posterior) distribution on each iteration. In this paper, we substantially expand the size of discrete-decision-variable optimization-via-simulation problems that can be attacked in this way by exploiting a particular GP—discrete Gaussian Markov random fields—and carefully tailored computational methods. The result is the rapid Gaussian Markov Improvement Algorithm (rGMIA), an algorithm that delivers both a global convergence guarantee and finite-sample optimality-gap inference for significantly larger problems. Between infrequent evaluations of the global conditional distribution, rGMIA applies the full power of GP learning to rapidly search smaller sets of promising feasible solutions that need not be spatially close. We carefully document the computational savings via complexity analysis and an extensive empirical study. Summary of Contribution: The broad topic of the paper is optimization via simulation, which means optimizing some performance measure of a system that may only be estimated by executing a stochastic, discrete-event simulation. Stochastic simulation is a core topic and method of operations research. The focus of this paper is on significantly speeding up the computations underlying an existing method that is based on Gaussian process learning, where the underlying Gaussian process is a discrete Gaussian Markov Random Field. This speed-up is accomplished by employing smart computational linear algebra, state-of-the-art algorithms, and a careful divide-and-conquer evaluation strategy. Problems of significantly greater size than any other existing algorithm with similar guarantees can solve are solved as illustrations.

AB - Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been shown to be remarkably effective in problems of modest feasible-region size and decision-variable dimension. The limitation to “modest” problems is a result of the computational overhead and numerical challenges encountered in computing the GP conditional (posterior) distribution on each iteration. In this paper, we substantially expand the size of discrete-decision-variable optimization-via-simulation problems that can be attacked in this way by exploiting a particular GP—discrete Gaussian Markov random fields—and carefully tailored computational methods. The result is the rapid Gaussian Markov Improvement Algorithm (rGMIA), an algorithm that delivers both a global convergence guarantee and finite-sample optimality-gap inference for significantly larger problems. Between infrequent evaluations of the global conditional distribution, rGMIA applies the full power of GP learning to rapidly search smaller sets of promising feasible solutions that need not be spatially close. We carefully document the computational savings via complexity analysis and an extensive empirical study. Summary of Contribution: The broad topic of the paper is optimization via simulation, which means optimizing some performance measure of a system that may only be estimated by executing a stochastic, discrete-event simulation. Stochastic simulation is a core topic and method of operations research. The focus of this paper is on significantly speeding up the computations underlying an existing method that is based on Gaussian process learning, where the underlying Gaussian process is a discrete Gaussian Markov Random Field. This speed-up is accomplished by employing smart computational linear algebra, state-of-the-art algorithms, and a careful divide-and-conquer evaluation strategy. Problems of significantly greater size than any other existing algorithm with similar guarantees can solve are solved as illustrations.

KW - design of experiments

KW - efficiency

KW - statistical analysis

U2 - 10.1287/ijoc.2020.0971

DO - 10.1287/ijoc.2020.0971

M3 - Journal article

VL - 33

SP - 837

EP - 1257

JO - INFORMS Journal on Computing

JF - INFORMS Journal on Computing

SN - 1091-9856

IS - 3

ER -
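
Illustrative sketch

The abstract identifies the per-iteration bottleneck as computing the GP conditional (posterior) distribution, and attributes the speed-up to the sparse precision matrix of a discrete Gaussian Markov random field combined with careful linear algebra. The sketch below is only a minimal illustration of that underlying computation, not the authors' rGMIA implementation: it conditions a toy chain-structured GMRF on a few hypothetical "simulated" nodes using sparse solves. The lattice size, precision values, observed indices, and observed values are invented for the example; only NumPy and SciPy are assumed.

# Minimal sketch (not the authors' code): conditional inference in a GMRF
# via its sparse precision matrix, the computation the abstract describes
# as the per-iteration bottleneck of inference-based optimization.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def gmrf_conditional(Q, mu, obs_idx, obs_val):
    """Conditional mean and marginal variances of unobserved nodes.

    Q       : sparse precision matrix of the joint GMRF
    mu      : prior mean vector
    obs_idx : indices of nodes with known values (e.g., simulated solutions)
    obs_val : the known values at those nodes
    """
    n = Q.shape[0]
    free_idx = np.setdiff1d(np.arange(n), obs_idx)
    Q_ff = Q[free_idx, :][:, free_idx].tocsc()   # conditional precision
    Q_fo = Q[free_idx, :][:, obs_idx].tocsc()

    # Conditional mean: mu_f - Q_ff^{-1} Q_fo (x_o - mu_o), via a sparse LU solve.
    lu = splu(Q_ff)
    rhs = Q_fo @ (obs_val - mu[obs_idx])
    cond_mean = mu[free_idx] - lu.solve(rhs)

    # Marginal conditional variances are the diagonal of Q_ff^{-1}; computed
    # here by columnwise solves for clarity, which is fine for a toy example
    # but not how one would proceed at the problem sizes the paper targets.
    I = np.eye(len(free_idx))
    cond_var = np.array([lu.solve(I[:, j])[j] for j in range(len(free_idx))])
    return free_idx, cond_mean, cond_var

if __name__ == "__main__":
    # Toy 1-D chain GMRF: tridiagonal (first-order Markov) precision matrix.
    n = 200
    Q = sp.diags([-1.0, 2.1, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    mu = np.zeros(n)
    obs_idx = np.array([10, 50, 120, 180])      # hypothetical simulated nodes
    obs_val = np.array([1.0, -0.5, 0.8, 0.2])   # hypothetical observed outputs
    idx, m, v = gmrf_conditional(Q, mu, obs_idx, obs_val)
    print("largest conditional mean among unsimulated nodes:", m.max())

The columnwise variance computation above is for readability only; at realistic scales one would rely on sparse factorizations of the precision matrix rather than forming columns of its inverse, which is the kind of computational linear algebra the abstract alludes to without this sketch claiming to reproduce it.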