Rights statement: Copyright © 2019, INFORMS
Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License
Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - Gaussian Markov random fields for discrete optimization via simulation
T2 - framework and algorithms
AU - Salemi, Peter L.
AU - Song, Eunhye
AU - Nelson, Barry
AU - Staum, Jeremy
N1 - Copyright © 2019, INFORMS
PY - 2019/2/21
Y1 - 2019/2/21
N2 - We consider optimizing the expected value of some performance measure of a dynamic stochastic simulation with a statistical guarantee for optimality when the decision variables are discrete, in particular, integer-ordered; the number of feasible solutions is large; and the model execution is too slow to simulate even a substantial fraction of them. Our goal is to create algorithms that stop searching when they can provide inference about the remaining optimality gap similar to the correct-selection guarantee of ranking and selection when it simulates all solutions. Further, our algorithm remains competitive with fixed-budget algorithms that search efficiently but do not provide such inference. To accomplish this we learn and exploit spatial relationships among the decision variables and objective function values using a Gaussian Markov random field (GMRF). Gaussian random fields on continuous domains are already used in deterministic and stochastic optimization because they facilitate the computation of measures, such as expected improvement, that balance exploration and exploitation. We show that GMRFs are particularly well suited to the discrete decision-variable problem, from both a modeling and a computational perspective. Specifically, GMRFs permit the definition of a sensible neighborhood structure, and they are defined by their precision matrices, which can be constructed to be sparse. Using this framework, we create both single and multiresolution algorithms, prove the asymptotic convergence of both, and evaluate their finite-time performance empirically.
AB - We consider optimizing the expected value of some performance measure of a dynamic stochastic simulation with a statistical guarantee for optimality when the decision variables are discrete, in particular, integer-ordered; the number of feasible solutions is large; and the model execution is too slow to simulate even a substantial fraction of them. Our goal is to create algorithms that stop searching when they can provide inference about the remaining optimality gap similar to the correct-selection guarantee of ranking and selection when it simulates all solutions. Further, our algorithm remains competitive with fixed-budget algorithms that search efficiently but do not provide such inference. To accomplish this we learn and exploit spatial relationships among the decision variables and objective function values using a Gaussian Markov random field (GMRF). Gaussian random fields on continuous domains are already used in deterministic and stochastic optimization because they facilitate the computation of measures, such as expected improvement, that balance exploration and exploitation. We show that GMRFs are particularly well suited to the discrete decision-variable problem, from both a modeling and a computational perspective. Specifically, GMRFs permit the definition of a sensible neighborhood structure, and they are defined by their precision matrices, which can be constructed to be sparse. Using this framework, we create both single and multiresolution algorithms, prove the asymptotic convergence of both, and evaluate their finite-time performance empirically.
KW - large-scale discrete optimization via simulation
KW - inferential optimization
KW - Gaussian Markov random fields
U2 - 10.1287/opre.2018.1778
DO - 10.1287/opre.2018.1778
M3 - Journal article
VL - 67
SP - 250
EP - 266
JO - Operations Research
JF - Operations Research
SN - 0030-364X
IS - 1
ER -