
Electronic data

  • 2412.10185v1

    Accepted author manuscript, 700 KB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Solving Robust Markov Decision Processes: Generic, Reliable, Efficient

Research output: Contribution to Journal/Magazine › Conference article › peer-review

Published

Standard

Solving Robust Markov Decision Processes: Generic, Reliable, Efficient. / Meggendorfer, Tobias; Weininger, Maximilian; Wienhöft, Patrick.
In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39, No. 25, 11.04.2025, p. 26631-26641.


Harvard

Meggendorfer, T, Weininger, M & Wienhöft, P 2025, 'Solving Robust Markov Decision Processes: Generic, Reliable, Efficient', Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 25, pp. 26631-26641. https://doi.org/10.48550/arXiv.2412.10185, https://doi.org/10.1609/aaai.v39i25.34865

APA

Meggendorfer, T., Weininger, M., & Wienhöft, P. (2025). Solving Robust Markov Decision Processes: Generic, Reliable, Efficient. Proceedings of the AAAI Conference on Artificial Intelligence, 39(25), 26631-26641. https://doi.org/10.48550/arXiv.2412.10185, https://doi.org/10.1609/aaai.v39i25.34865

Vancouver

Meggendorfer T, Weininger M, Wienhöft P. Solving Robust Markov Decision Processes: Generic, Reliable, Efficient. Proceedings of the AAAI Conference on Artificial Intelligence. 2025 Apr 11;39(25):26631-26641. doi: 10.48550/arXiv.2412.10185, 10.1609/aaai.v39i25.34865

Author

Meggendorfer, Tobias ; Weininger, Maximilian ; Wienhöft, Patrick. / Solving Robust Markov Decision Processes: Generic, Reliable, Efficient. In: Proceedings of the AAAI Conference on Artificial Intelligence. 2025 ; Vol. 39, No. 25. pp. 26631-26641.

Bibtex

@article{1f602b9f11e648e985e6d046140484b8,
title = "Solving Robust Markov Decision Processes: Generic, Reliable, Efficient",
abstract = "Markov decision processes (MDP) are a well-established model for sequential decision-making in the presence of probabilities. In robust MDP (RMDP), every action is associated with an uncertainty set of probability distributions, modelling that transition probabilities are not known precisely. Based on the known theoretical connection to stochastic games, we provide a framework for solving RMDPs that is generic, reliable, and efficient. It is generic both with respect to the model, allowing for a wide range of uncertainty sets, including but not limited to intervals, L1- or L2-balls, and polytopes; and with respect to the objective, including long-run average reward, undiscounted total reward, and stochastic shortest path. It is reliable, as our approach not only converges in the limit, but provides precision guarantees at any time during the computation. It is efficient because - in contrast to state-of-the-art approaches - it avoids explicitly constructing the underlying stochastic game. Consequently, our prototype implementation outperforms existing tools by several orders of magnitude and can solve RMDPs with a million states in under a minute.",
author = "Tobias Meggendorfer and Maximilian Weininger and Patrick Wienh{\"o}ft",
year = "2025",
month = apr,
day = "11",
doi = "10.48550/arXiv.2412.10185",
language = "English",
volume = "39",
pages = "26631--26641",
journal = "Proceedings of the AAAI Conference on Artificial Intelligence",
issn = "2159-5399",
publisher = "Association for the Advancement of Artificial Intelligence",
number = "25",
note = "39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 ; Conference date: 25-02-2025 Through 04-03-2025",

}

RIS

TY - JOUR

T1 - Solving Robust Markov Decision Processes: Generic, Reliable, Efficient

T2 - 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025

AU - Meggendorfer, Tobias

AU - Weininger, Maximilian

AU - Wienhöft, Patrick

PY - 2025/4/11

Y1 - 2025/4/11

N2 - Markov decision processes (MDP) are a well-established model for sequential decision-making in the presence of probabilities. In robust MDP (RMDP), every action is associated with an uncertainty set of probability distributions, modelling that transition probabilities are not known precisely. Based on the known theoretical connection to stochastic games, we provide a framework for solving RMDPs that is generic, reliable, and efficient. It is generic both with respect to the model, allowing for a wide range of uncertainty sets, including but not limited to intervals, L1- or L2-balls, and polytopes; and with respect to the objective, including long-run average reward, undiscounted total reward, and stochastic shortest path. It is reliable, as our approach not only converges in the limit, but provides precision guarantees at any time during the computation. It is efficient because - in contrast to state-of-the-art approaches - it avoids explicitly constructing the underlying stochastic game. Consequently, our prototype implementation outperforms existing tools by several orders of magnitude and can solve RMDPs with a million states in under a minute.

AB - Markov decision processes (MDP) are a well-established model for sequential decision-making in the presence of probabilities. In robust MDP (RMDP), every action is associated with an uncertainty set of probability distributions, modelling that transition probabilities are not known precisely. Based on the known theoretical connection to stochastic games, we provide a framework for solving RMDPs that is generic, reliable, and efficient. It is generic both with respect to the model, allowing for a wide range of uncertainty sets, including but not limited to intervals, L1- or L2-balls, and polytopes; and with respect to the objective, including long-run average reward, undiscounted total reward, and stochastic shortest path. It is reliable, as our approach not only converges in the limit, but provides precision guarantees at any time during the computation. It is efficient because - in contrast to state-of-the-art approaches - it avoids explicitly constructing the underlying stochastic game. Consequently, our prototype implementation outperforms existing tools by several orders of magnitude and can solve RMDPs with a million states in under a minute.

U2 - 10.48550/arXiv.2412.10185

DO - 10.48550/arXiv.2412.10185

M3 - Conference article

AN - SCOPUS:105003911147

VL - 39

SP - 26631

EP - 26641

JO - Proceedings of the AAAI Conference on Artificial Intelligence

JF - Proceedings of the AAAI Conference on Artificial Intelligence

SN - 2159-5399

IS - 25

Y2 - 25 February 2025 through 4 March 2025

ER -
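
Illustrative sketch

The abstract above describes RMDPs in which every action carries an uncertainty set of transition distributions and values are computed against the worst case. As a purely illustrative aid, and not the paper's method (the paper avoids explicitly constructing the underlying stochastic game and provides anytime precision guarantees for undiscounted objectives), the following minimal Python sketch shows plain robust value iteration for the special case of interval uncertainty sets with a discounted-reward objective; the data layout, function names, and discount factor are assumptions introduced here for illustration only.

# Minimal sketch: worst-case (robust) value iteration over interval uncertainty sets.
# Assumes a discounted objective for simplicity; the paper targets undiscounted
# objectives (total reward, stochastic shortest path, long-run average reward).

from typing import Dict, List, Tuple

State = int
Action = str
# For each (state, action): a scalar reward and a list of (successor, low, high)
# probability intervals describing the uncertainty set.
Reward = Dict[Tuple[State, Action], float]
Intervals = Dict[Tuple[State, Action], List[Tuple[State, float, float]]]


def worst_case_expectation(intervals: List[Tuple[State, float, float]],
                           values: Dict[State, float]) -> float:
    """Minimise sum_s' p(s') * values[s'] over distributions p with
    low <= p(s') <= high and sum_s' p(s') = 1 (greedy rule for interval sets)."""
    # Start every successor at its lower bound, then push the remaining mass
    # onto the successors with the smallest values first.
    succ = sorted(intervals, key=lambda t: values[t[0]])
    probs = {s: lo for s, lo, _ in succ}
    remaining = 1.0 - sum(probs.values())
    for s, lo, hi in succ:
        bump = min(hi - lo, remaining)
        probs[s] += bump
        remaining -= bump
        if remaining <= 1e-12:
            break
    return sum(p * values[s] for s, p in probs.items())


def robust_value_iteration(states: List[State],
                           actions: Dict[State, List[Action]],
                           rewards: Reward,
                           intervals: Intervals,
                           gamma: float = 0.95,
                           eps: float = 1e-6) -> Dict[State, float]:
    """Pessimistic values: V(s) = max_a min_{p in U(s,a)} [ r(s,a) + gamma * E_p V ]."""
    values = {s: 0.0 for s in states}
    while True:
        new_values = {
            s: max(rewards[(s, a)]
                   + gamma * worst_case_expectation(intervals[(s, a)], values)
                   for a in actions[s])
            for s in states
        }
        if max(abs(new_values[s] - values[s]) for s in states) < eps:
            return new_values
        values = new_values

For example, with two successors valued 0 and 1 and intervals [0.2, 0.8] on each, the worst-case distribution places 0.8 on the lower-valued successor, giving an expectation of 0.2. Note that this naive iteration only converges in the limit; the reliability and efficiency claims of the paper refer to its own algorithm, not to this sketch.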