Published in: 4OR 1/2017

11.06.2016 | Research paper

Finite approximation for finite-horizon continuous-time Markov decision processes

Author: Qingda Wei

Abstract

In this paper we study continuous-time Markov decision processes with a denumerable state space, a Borel action space, and unbounded transition and cost rates. The optimality criterion considered is the finite-horizon expected total cost criterion. Under suitable conditions, we propose a finite approximation scheme for the approximate computation of an optimal policy and the value function, and we obtain the corresponding error estimates. Furthermore, our main results are illustrated with a controlled birth-and-death system.
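To make the setting concrete, the Python sketch below shows one generic way a finite approximation of this kind can be computed numerically: the countable state space of a controlled birth-and-death system is truncated to {0, ..., N}, the action set is replaced by a finite grid, and the finite-horizon dynamic-programming equation is solved backwards in time. The model ingredients (birth_rate, death_rate, cost_rate, the action grid) and the explicit Euler time discretization are illustrative assumptions only; they are not the approximation scheme or the error bounds constructed in the paper.

```python
import numpy as np

# Minimal sketch (hypothetical model, not the paper's construction):
# truncate the denumerable state space to {0, ..., N} and solve the
# finite-horizon dynamic-programming equation backwards in time with an
# explicit Euler scheme, minimizing over a finite grid of actions.

MU, KAPPA = 1.0, 0.5
ACTIONS = np.linspace(0.0, 2.0, 21)   # finite grid approximating a compact action set

def birth_rate(i, a):                 # controlled birth rate (illustrative choice)
    return a

def death_rate(i, a):                 # death rate grows linearly in i, hence unbounded
    return MU * i

def cost_rate(i, a):                  # running cost, also unbounded in the state
    return i + KAPPA * a

def finite_approximation(T=1.0, N=50, n_steps=2000):
    """Approximate v(0, .) and a greedy policy on the truncated space {0, ..., N}.

    The scheme discretizes
        -dv/dt(t, i) = min_a [ c(i, a) + sum_{j != i} q(j | i, a) (v(t, j) - v(t, i)) ],
    with terminal condition v(T, .) = 0; jumps out of the truncated space are
    simply suppressed.  Error bounds for such truncations are the subject of
    the paper and are not reproduced here.
    """
    dt = T / n_steps                  # dt * (max total rate) should stay below 1 for stability
    v = np.zeros(N + 1)               # terminal condition v(T, i) = 0
    policy = np.zeros(N + 1)
    for _ in range(n_steps):
        v_new = np.empty_like(v)
        for i in range(N + 1):
            best_val, best_a = np.inf, ACTIONS[0]
            for a in ACTIONS:
                drift = 0.0
                if i < N:             # birth i -> i + 1 (suppressed at the boundary N)
                    drift += birth_rate(i, a) * (v[i + 1] - v[i])
                if i > 0:             # death i -> i - 1
                    drift += death_rate(i, a) * (v[i - 1] - v[i])
                val = v[i] + dt * (cost_rate(i, a) + drift)
                if val < best_val:
                    best_val, best_a = val, a
            v_new[i], policy[i] = best_val, best_a
        v = v_new
    return v, policy

if __name__ == "__main__":
    v0, pi0 = finite_approximation()
    print("approximate value at state 0:", v0[0])
    print("approximate greedy action at state 0:", pi0[0])
```

Enlarging N and n_steps refines the truncation and the time discretization; how fast such refinements converge, and with what explicit error estimates under unbounded rates, is precisely what the paper analyzes.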


Metadata
Title
Finite approximation for finite-horizon continuous-time Markov decision processes
Author
Qingda Wei
Publication date
11.06.2016
Publisher
Springer Berlin Heidelberg
Published in
4OR / Issue 1/2017
Print ISSN: 1619-4500
Electronic ISSN: 1614-2411
DOI
https://doi.org/10.1007/s10288-016-0321-3
