Accepted author manuscript, 700 KB, PDF document
Available under license: CC BY: Creative Commons Attribution 4.0 International License
Research output: Contribution to Journal/Magazine › Conference article › peer-review
| Journal publication date | 11/04/2025 |
|---|---|
| Journal | Proceedings of the AAAI Conference on Artificial Intelligence |
| Issue number | 25 |
| Volume | 39 |
| Number of pages | 11 |
| Pages (from-to) | 26631-26641 |
| Publication status | Published |
| Original language | English |
| Event | 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025. Duration: 25/02/2025 → 4/03/2025 |

| Conference | 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 |
|---|---|
| Country/Territory | United States |
| City | Philadelphia |
| Period | 25/02/25 → 4/03/25 |
Markov decision processes (MDPs) are a well-established model for sequential decision-making in the presence of probabilities. In robust MDPs (RMDPs), every action is associated with an uncertainty set of probability distributions, modelling that transition probabilities are not known precisely. Based on the known theoretical connection to stochastic games, we provide a framework for solving RMDPs that is generic, reliable, and efficient. It is generic both with respect to the model, allowing for a wide range of uncertainty sets, including but not limited to intervals, L1- or L2-balls, and polytopes; and with respect to the objective, including long-run average reward, undiscounted total reward, and stochastic shortest path. It is reliable, as our approach not only converges in the limit but also provides precision guarantees at any time during the computation. It is efficient because, in contrast to state-of-the-art approaches, it avoids explicitly constructing the underlying stochastic game. Consequently, our prototype implementation outperforms existing tools by several orders of magnitude and can solve RMDPs with a million states in under a minute.
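To make the RMDP setting concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of robust value iteration for the simplest kind of uncertainty set mentioned above, intervals on transition probabilities, under a total-reward objective. The maximising agent picks actions while an adversarial "nature" picks, for each action, the worst distribution consistent with the interval bounds; for intervals this inner minimisation has a simple greedy solution. All names and the tiny example model are invented for illustration, and the naive stopping criterion below does not by itself provide the anytime precision guarantees the paper establishes.

```python
def worst_case_expectation(lo, hi, values):
    """Minimise sum_i p_i * values[i] over distributions with lo[i] <= p_i <= hi[i].

    Greedy: start every p_i at its lower bound, then pour the remaining
    probability mass into successors with the smallest values first.
    """
    p = list(lo)
    remaining = 1.0 - sum(lo)
    for i in sorted(range(len(values)), key=lambda i: values[i]):
        add = min(hi[i] - lo[i], remaining)
        p[i] += add
        remaining -= add
    return sum(pi * v for pi, v in zip(p, values))


def robust_value_iteration(states, actions, reward, intervals, eps=1e-8):
    """intervals[s][a] = (succ, lo, hi): successor states and probability bounds.

    States without actions are terminal and keep value 0.
    """
    V = {s: 0.0 for s in states}
    while True:
        V_new = {}
        for s in states:
            best = 0.0  # value of a terminal state
            for a in actions(s):
                succ, lo, hi = intervals[s][a]
                vals = [V[t] for t in succ]
                best = max(best, reward(s, a) + worst_case_expectation(lo, hi, vals))
            V_new[s] = best
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new
```

On a toy three-state RMDP where `s0` reaches `s1` or the terminal `goal` with probabilities in `[0.3, 0.5]` and `[0.5, 0.7]` respectively, nature shifts as much mass as the intervals allow toward the low-value successor, so the robust value of `s0` is strictly below the value under the nominal midpoint distribution.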