
Markov games with incomplete information


Abstract

We consider zero-sum Markov games with incomplete information in which the second player is never informed about the current state of the underlying Markov chain. We show that the game has a value and that both players possess optimal strategies. In particular, we present finite algorithms, based on linear programming, for computing optimal strategies for the informed and the uninformed player.
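The paper's finite algorithms are not reproduced here, but as a minimal sketch of the linear-programming step underlying such computations, the Python snippet below finds the value and an optimal mixed strategy of an ordinary zero-sum matrix game via an LP. The function name solve_matrix_game, the use of scipy.optimize.linprog, and the matching-pennies example are illustrative assumptions, not constructions taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Value and optimal mixed strategy of the row (maximizing) player
    for the zero-sum matrix game with payoff matrix A. Illustrative sketch."""
    m, n = A.shape
    # Decision variables: x_1..x_m (mixed strategy) and v (game value).
    # linprog minimizes, so minimize -v to maximize v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every column j of A: v - sum_i x_i A[i, j] <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # The strategy weights must sum to one.
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # x >= 0, v unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: optimal strategy (1/2, 1/2), value 0.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(solve_matrix_game(A))
```

In the Markov-game setting treated in the paper, the programs are naturally more involved than this single matrix game; the snippet only illustrates the basic LP formulation of a zero-sum game on which such algorithms build.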




Cite this article

Krausz, A., Rieder, U. Markov games with incomplete information. Mathematical Methods of Operations Research 46, 263–279 (1997). https://doi.org/10.1007/BF01217695
