
2017 | Original Paper | Book Chapter

Value Iteration for Long-Run Average Reward in Markov Decision Processes

Authors: Pranav Ashok, Krishnendu Chatterjee, Przemysław Daca, Jan Křetínský, Tobias Meggendorfer

Published in: Computer Aided Verification

Publisher: Springer International Publishing

Abstract

Markov decision processes (MDPs) are standard models for probabilistic systems with non-deterministic behaviours. Long-run average rewards provide a mathematically elegant formalism for expressing long-term performance. Value iteration (VI) is one of the simplest and most efficient algorithmic approaches to MDPs with other properties, such as reachability objectives. Unfortunately, a naive extension of VI does not work for MDPs with long-run average rewards, as there is no known stopping criterion. In this work our contributions are threefold. (1) We refute a conjecture related to stopping criteria for MDPs with long-run average rewards. (2) We present two practical algorithms for MDPs with long-run average rewards based on VI. First, we show that a combination of applying VI locally for each maximal end-component (MEC) and VI for reachability objectives can provide approximation guarantees. Second, extending the above approach with a simulation-guided on-demand variant of VI, we present an anytime algorithm that is able to deal with very large models. (3) Finally, we present experimental results showing that our methods significantly outperform the standard approaches on several benchmarks.
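To illustrate the reachability building block mentioned in the abstract, the following is a minimal sketch (not the authors' implementation) of value iteration that tracks a lower and an upper bound so that their gap yields a stopping criterion, in the spirit of interval iteration [HM14]. The dict-based MDP encoding and the name `reachability_vi` are assumptions made for this example; the upper bound converges only when the model has no end components outside the target and sink states, which holds for the toy model below.

```python
def reachability_vi(mdp, target, sink=frozenset(), epsilon=1e-6):
    """Approximate max reachability probabilities of `target`.

    mdp: dict mapping state -> {action: {successor: probability}};
    target states have value 1, sink states value 0.
    Assumes no end components outside target/sink, so that the
    upper bound converges and hi - lo bounds the error.
    """
    states = set(mdp) | set(target) | set(sink)
    lo = {s: 1.0 if s in target else 0.0 for s in states}
    hi = {s: 0.0 if s in sink else 1.0 for s in states}
    interior = set(mdp) - set(target) - set(sink)
    while max((hi[s] - lo[s] for s in interior), default=0.0) > epsilon:
        for bound in (lo, hi):
            # Bellman update: best action maximises expected successor value.
            new = {s: max(sum(p * bound[t] for t, p in dist.items())
                          for dist in mdp[s].values())
                   for s in interior}
            bound.update(new)
    return lo

# Toy MDP: from s0, action "a" reaches the goal with probability 0.5.
mdp = {
    "s0": {"a": {"goal": 0.5, "bad": 0.5}, "b": {"goal": 0.3, "bad": 0.7}},
    "s1": {"a": {"s0": 1.0}},
}
values = reachability_vi(mdp, target={"goal"}, sink={"bad"})
```

Plain value iteration from below gives the same limit but no way to tell how far the current iterate is from it; maintaining both bounds is what turns the iteration into an algorithm with a guaranteed precision on termination.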


Footnotes
1
This standard definition assumes that actions are unique for each state, i.e. \(\mathsf {Av}(s) \mathbin {\cap }\mathsf {Av}(s') = \emptyset \) for \(s \ne s'\). The usual way to achieve this in general is to replace \( Act \) by \(S\times Act \) and to adapt \(\mathsf {Av}\), \(\varDelta \), and \(r\) accordingly.
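The transformation in this footnote can be sketched concretely: pair every action with its state so that action names become unique per state. The dict encoding and the helper name `make_actions_unique` are assumptions for this illustration, not part of the paper.

```python
def make_actions_unique(av, delta, r):
    """Replace Act by S x Act so actions are unique per state.

    av: state -> list of available actions;
    delta: (state, action) -> successor distribution;
    r: (state, action) -> reward.
    """
    # Each action a available in state s becomes the pair (s, a).
    av2 = {s: [(s, a) for a in acts] for s, acts in av.items()}
    delta2 = {(s, (s, a)): dist for (s, a), dist in delta.items()}
    r2 = {(s, (s, a)): rew for (s, a), rew in r.items()}
    return av2, delta2, r2

# Two states sharing the action name "a" before the transformation.
av = {"s": ["a"], "t": ["a"]}
delta = {("s", "a"): {"t": 1.0}, ("t", "a"): {"s": 1.0}}
r = {("s", "a"): 1.0, ("t", "a"): 0.0}
av2, delta2, r2 = make_actions_unique(av, delta, r)
```

After the transformation, \(\mathsf {Av}(s) = \{(s, a)\}\) and \(\mathsf {Av}(t) = \{(t, a)\}\) are disjoint, as the definition requires.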
 
2
PRISM contains several other methods to solve reachability, all of which are imprecise and behaved comparably in our tests.
 
3
MultiGain also supports the LP solver lp_solve 5.5 bundled with PRISM, which consistently performed worse than the Gurobi backend.
 
References
[ACD+17]
Ashok, P., Chatterjee, K., Daca, P., Křetínský, J., Meggendorfer, T.: Value iteration for long-run average reward in Markov decision processes. Technical report arXiv:1705.02326, arXiv.org (2017)
[BCC+14]
Brázdil, T., Chatterjee, K., Chmelík, M., Forejt, V., Křetínský, J., Kwiatkowska, M., Parker, D., Ujma, M.: Verification of Markov decision processes using learning algorithms. In: Cassez, F., Raskin, J.-F. (eds.) ATVA 2014. LNCS, vol. 8837, pp. 98–114. Springer, Cham (2014). doi:10.1007/978-3-319-11936-6_8
[BCFK13]
Brázdil, T., Chatterjee, K., Forejt, V., Kučera, A.: Trading performance for stability in Markov decision processes. In: LICS, pp. 331–340 (2013)
[BCFK15]
Brázdil, T., Chatterjee, K., Forejt, V., Kučera, A.: MultiGain: a controller synthesis tool for MDPs with multiple mean-payoff objectives. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 181–187. Springer, Heidelberg (2015). doi:10.1007/978-3-662-46681-0_12
[BK08]
Baier, C., Katoen, J.-P.: Principles of Model Checking. MIT Press, Cambridge (2008)
[CH11]
Chatterjee, K., Henzinger, M.: Faster and dynamic algorithms for maximal end-component decomposition and related graph problems in probabilistic verification. In: SODA, pp. 1318–1336. SIAM (2011)
[CH12]
Chatterjee, K., Henzinger, M.: An O(\({n}^{2}\)) time algorithm for alternating Büchi games. In: SODA, pp. 1386–1399. SIAM (2012)
[CH14]
Chatterjee, K., Henzinger, M.: Efficient and dynamic algorithms for alternating Büchi games and maximal end-component decomposition. J. ACM 61(3), 15:1–15:40 (2014)
[Cha07]
[CI14]
Chatterjee, K., Ibsen-Jensen, R.: The complexity of ergodic mean-payoff games. In: Esparza, J., Fraigniaud, P., Husfeldt, T., Koutsoupias, E. (eds.) ICALP 2014. LNCS, vol. 8573, pp. 122–133. Springer, Heidelberg (2014). doi:10.1007/978-3-662-43951-7_11
[CKK15]
Chatterjee, K., Komárková, Z., Křetínský, J.: Unifying two views on multiple mean-payoff objectives in Markov decision processes. In: LICS, pp. 244–256 (2015)
[CL13]
Chatterjee, K., Łącki, J.: Faster algorithms for Markov decision processes with low treewidth. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 543–558. Springer, Heidelberg (2013). doi:10.1007/978-3-642-39799-8_36
[CY95]
[dA97]
de Alfaro, L.: Formal verification of probabilistic systems. Ph.D. thesis, Stanford University (1997)
[DFP04]
Duflot, M., Fribourg, L., Picaronny, C.: Randomized dining philosophers without fairness assumption. Distrib. Comput. 17(1), 65–76 (2004)
[Gir14]
Giro, S.: Optimal schedulers vs optimal bases: an approach for efficient exact solving of Markov decision processes. Theor. Comput. Sci. 538, 70–83 (2014)
[HM14]
Haddad, S., Monmege, B.: Reachability in MDPs: refining convergence of value iteration. In: Ouaknine, J., Potapov, I., Worrell, J. (eds.) RP 2014. LNCS, vol. 8762, pp. 125–137. Springer, Cham (2014). doi:10.1007/978-3-319-11439-2_10
[How60]
Howard, R.A.: Dynamic Programming and Markov Processes. MIT Press, New York, London, Cambridge (1960)
[KNP11]
Kwiatkowska, M., Norman, G., Parker, D.: PRISM 4.0: verification of probabilistic real-time systems. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 585–591. Springer, Heidelberg (2011). doi:10.1007/978-3-642-22110-1_47
[KNPS06]
Kwiatkowska, M., Norman, G., Parker, D., Sproston, J.: Performance analysis of probabilistic timed automata using digital clocks. Formal Methods Syst. Des. 29, 33–78 (2006)
[KNPV09]
Kwiatkowska, M., Norman, G., Parker, D., Vigliotti, M.G.: Probabilistic mobile ambients. Theoret. Comput. Sci. 410(12–13), 1272–1303 (2009)
[KPC12]
Komuravelli, A., Păsăreanu, C.S., Clarke, E.M.: Assume-guarantee abstraction refinement for probabilistic systems. In: Madhusudan, P., Seshia, S.A. (eds.) CAV 2012. LNCS, vol. 7358, pp. 310–326. Springer, Heidelberg (2012). doi:10.1007/978-3-642-31424-7_25
[MLG05]
McMahan, H.B., Likhachev, M., Gordon, G.J.: Bounded real-time dynamic programming: RTDP with monotone upper bounds and performance guarantees. In: ICML, pp. 569–576 (2005)
[MM02]
McIver, A.K., Morgan, C.C.: Games, probability, and the quantitative \(\mu \)-calculus qM\(\mu \). In: Baaz, M., Voronkov, A. (eds.) LPAR 2002. LNCS, vol. 2514, pp. 292–310. Springer, Heidelberg (2002). doi:10.1007/3-540-36078-6_20
[MM07]
McIver, A., Morgan, C.: Results on the quantitative \(\mu \)-calculus qMu. ACM Trans. Comput. Logic 8(1), 3 (2007)
[PGT03]
Pineau, J., Gordon, G.J., Thrun, S.: Point-based value iteration: an anytime algorithm for POMDPs. In: IJCAI, pp. 1025–1032 (2003)
[Put94]
Puterman, M.L.: Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, Hoboken (1994)
[Rab82]
Rabin, M.O.: N-process mutual exclusion with bounded waiting by 4 log₂ N-valued shared variable. J. Comput. Syst. Sci. 25(1), 66–75 (1982)
[Seg95]
Segala, R.: Modelling and verification of randomized distributed real time systems. Ph.D. thesis, Massachusetts Institute of Technology (1995)
[Vei66]
Veinott, A.F.: On finding optimal policies in discrete dynamic programming with no discounting. Ann. Math. Statist. 37(5), 1284–1294 (1966)
[WBB+10]
Wimmer, R., Braitling, B., Becker, B., Hahn, E.M., Crouzen, P., Hermanns, H., Dhama, A., Theel, O.E.: Symblicit calculation of long-run averages for concurrent probabilistic systems. In: QEST, pp. 27–36 (2010)
Metadata
Title
Value Iteration for Long-Run Average Reward in Markov Decision Processes
Authors
Pranav Ashok
Krishnendu Chatterjee
Przemysław Daca
Jan Křetínský
Tobias Meggendorfer
Copyright year
2017
DOI
https://doi.org/10.1007/978-3-319-63387-9_10
